Sep 11 00:19:48.006950 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 22:25:29 -00 2025
Sep 11 00:19:48.006988 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:19:48.007009 kernel: BIOS-provided physical RAM map:
Sep 11 00:19:48.007016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 11 00:19:48.007023 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 11 00:19:48.007030 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 11 00:19:48.007038 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 11 00:19:48.007053 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 11 00:19:48.007060 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 11 00:19:48.007066 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 11 00:19:48.007073 kernel: NX (Execute Disable) protection: active
Sep 11 00:19:48.007081 kernel: APIC: Static calls initialized
Sep 11 00:19:48.007088 kernel: SMBIOS 2.8 present.
Sep 11 00:19:48.007282 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 11 00:19:48.007296 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:19:48.007304 kernel: Hypervisor detected: KVM
Sep 11 00:19:48.007319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 11 00:19:48.007327 kernel: kvm-clock: using sched offset of 5093217730 cycles
Sep 11 00:19:48.007336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 11 00:19:48.007344 kernel: tsc: Detected 1999.999 MHz processor
Sep 11 00:19:48.007352 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 11 00:19:48.007361 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 11 00:19:48.007372 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 11 00:19:48.007380 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 11 00:19:48.007388 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 11 00:19:48.007396 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:19:48.007404 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 11 00:19:48.007416 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007430 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007442 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007455 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 11 00:19:48.007472 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007484 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007498 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007509 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:19:48.007521 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 11 00:19:48.007532 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 11 00:19:48.007544 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 11 00:19:48.007556 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 11 00:19:48.007570 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 11 00:19:48.007592 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 11 00:19:48.007600 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 11 00:19:48.007609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 11 00:19:48.007617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 11 00:19:48.007628 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Sep 11 00:19:48.007650 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Sep 11 00:19:48.007665 kernel: Zone ranges:
Sep 11 00:19:48.007677 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 11 00:19:48.007689 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 11 00:19:48.007701 kernel: Normal empty
Sep 11 00:19:48.007713 kernel: Device empty
Sep 11 00:19:48.007725 kernel: Movable zone start for each node
Sep 11 00:19:48.007737 kernel: Early memory node ranges
Sep 11 00:19:48.007750 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 11 00:19:48.007762 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 11 00:19:48.007780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 11 00:19:48.007793 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 11 00:19:48.007806 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 11 00:19:48.007820 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 11 00:19:48.007833 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 11 00:19:48.007848 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 11 00:19:48.007868 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 11 00:19:48.007881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 11 00:19:48.007899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 11 00:19:48.007916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 11 00:19:48.007933 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 11 00:19:48.007947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 11 00:19:48.007960 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 11 00:19:48.007971 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 11 00:19:48.007983 kernel: TSC deadline timer available
Sep 11 00:19:48.007997 kernel: CPU topo: Max. logical packages: 1
Sep 11 00:19:48.008010 kernel: CPU topo: Max. logical dies: 1
Sep 11 00:19:48.008021 kernel: CPU topo: Max. dies per package: 1
Sep 11 00:19:48.008033 kernel: CPU topo: Max. threads per core: 1
Sep 11 00:19:48.008042 kernel: CPU topo: Num. cores per package: 2
Sep 11 00:19:48.008050 kernel: CPU topo: Num. threads per package: 2
Sep 11 00:19:48.008058 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 11 00:19:48.008067 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 11 00:19:48.008076 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 11 00:19:48.009458 kernel: Booting paravirtualized kernel on KVM
Sep 11 00:19:48.009514 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 11 00:19:48.009524 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 11 00:19:48.009546 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 11 00:19:48.009554 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 11 00:19:48.009563 kernel: pcpu-alloc: [0] 0 1
Sep 11 00:19:48.009571 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 11 00:19:48.009583 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:19:48.009592 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:19:48.009601 kernel: random: crng init done
Sep 11 00:19:48.009609 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:19:48.009621 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 11 00:19:48.009629 kernel: Fallback order for Node 0: 0
Sep 11 00:19:48.009637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Sep 11 00:19:48.009646 kernel: Policy zone: DMA32
Sep 11 00:19:48.009654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:19:48.009662 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 11 00:19:48.009670 kernel: Kernel/User page tables isolation: enabled
Sep 11 00:19:48.009679 kernel: ftrace: allocating 40103 entries in 157 pages
Sep 11 00:19:48.009687 kernel: ftrace: allocated 157 pages with 5 groups
Sep 11 00:19:48.009699 kernel: Dynamic Preempt: voluntary
Sep 11 00:19:48.009707 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:19:48.009717 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:19:48.009725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 11 00:19:48.009734 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:19:48.009742 kernel: Rude variant of Tasks RCU enabled.
Sep 11 00:19:48.009750 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:19:48.009759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:19:48.009767 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 11 00:19:48.009777 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:19:48.009793 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:19:48.009802 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 11 00:19:48.009810 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 11 00:19:48.009819 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
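The kernel command line echoed above mixes key=value options with repeated keys (two `console=` entries). A small sketch of splitting it up the way one might when debugging boot parameters (the fragment below is copied from the log; keys with embedded `=`, like `root=LABEL=ROOT`, split only on the first one):

```python
# Parse a fragment of the kernel command line from the log into options,
# keeping all values for repeated keys such as console=.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected")
opts = {}
for tok in cmdline.split():
    key, _, val = tok.partition("=")  # split on the first '=' only
    opts.setdefault(key, []).append(val)
print(opts["console"])  # ['ttyS0,115200n8', 'tty0']
print(opts["root"])     # ['LABEL=ROOT']
```

The two `console=` values explain why the log shows both `[tty0]` and `[ttyS0]` enabled as legacy consoles later in the boot.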
Sep 11 00:19:48.009827 kernel: Console: colour VGA+ 80x25
Sep 11 00:19:48.009835 kernel: printk: legacy console [tty0] enabled
Sep 11 00:19:48.009843 kernel: printk: legacy console [ttyS0] enabled
Sep 11 00:19:48.009852 kernel: ACPI: Core revision 20240827
Sep 11 00:19:48.009863 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 11 00:19:48.009881 kernel: APIC: Switch to symmetric I/O mode setup
Sep 11 00:19:48.009890 kernel: x2apic enabled
Sep 11 00:19:48.009901 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 11 00:19:48.009910 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 11 00:19:48.009924 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Sep 11 00:19:48.009932 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Sep 11 00:19:48.009941 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 11 00:19:48.009950 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 11 00:19:48.009959 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 11 00:19:48.009972 kernel: Spectre V2 : Mitigation: Retpolines
Sep 11 00:19:48.009981 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 11 00:19:48.009990 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 11 00:19:48.009999 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 11 00:19:48.010013 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 11 00:19:48.010025 kernel: MDS: Mitigation: Clear CPU buffers
Sep 11 00:19:48.010039 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 11 00:19:48.010058 kernel: active return thunk: its_return_thunk
Sep 11 00:19:48.010074 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 11 00:19:48.010899 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 11 00:19:48.010937 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 11 00:19:48.010947 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 11 00:19:48.010956 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 11 00:19:48.010966 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 11 00:19:48.010976 kernel: Freeing SMP alternatives memory: 32K
Sep 11 00:19:48.010985 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:19:48.011003 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:19:48.011012 kernel: landlock: Up and running.
Sep 11 00:19:48.011021 kernel: SELinux: Initializing.
Sep 11 00:19:48.011030 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:19:48.011039 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 11 00:19:48.011048 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 11 00:19:48.011059 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 11 00:19:48.011073 kernel: signal: max sigframe size: 1776
Sep 11 00:19:48.011106 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:19:48.011217 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:19:48.011236 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:19:48.011249 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 11 00:19:48.011261 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:19:48.011284 kernel: smpboot: x86: Booting SMP configuration:
Sep 11 00:19:48.011301 kernel: .... node #0, CPUs: #1
Sep 11 00:19:48.011315 kernel: smp: Brought up 1 node, 2 CPUs
Sep 11 00:19:48.011330 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Sep 11 00:19:48.011344 kernel: Memory: 1968964K/2096612K available (14336K kernel code, 2429K rwdata, 9960K rodata, 53832K init, 1088K bss, 123092K reserved, 0K cma-reserved)
Sep 11 00:19:48.011365 kernel: devtmpfs: initialized
Sep 11 00:19:48.011379 kernel: x86/mm: Memory block size: 128MB
Sep 11 00:19:48.011393 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:19:48.011407 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 11 00:19:48.011421 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:19:48.011435 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:19:48.011449 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:19:48.011465 kernel: audit: type=2000 audit(1757549984.295:1): state=initialized audit_enabled=0 res=1
Sep 11 00:19:48.011485 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:19:48.011500 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 11 00:19:48.011514 kernel: cpuidle: using governor menu
Sep 11 00:19:48.011528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:19:48.011543 kernel: dca service started, version 1.12.1
Sep 11 00:19:48.011558 kernel: PCI: Using configuration type 1 for base access
Sep 11 00:19:48.011574 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
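The hash-table lines scattered through the log (dentry cache, futex, mount cache, and later the TCP tables) all report an `(order: N, M bytes)` pair. Those are consistent with M = 2^N pages, assuming the x86 4 KiB page size (an assumption here, though it matches every pair in this log):

```python
# Relation between the "order" and "bytes" fields in the kernel's
# hash-table allocation messages, assuming 4 KiB pages (x86 default).
PAGE_SIZE = 4096

def table_bytes(order: int) -> int:
    """Bytes in an order-N page allocation: 2**N contiguous pages."""
    return (1 << order) * PAGE_SIZE

print(table_bytes(3))  # 32768  -> matches "futex hash table ... (order: 3, 32768 bytes)"
print(table_bytes(9))  # 2097152 -> matches "Dentry cache ... (order: 9, 2097152 bytes)"
```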
Sep 11 00:19:48.011589 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:19:48.011604 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:19:48.011624 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:19:48.011638 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:19:48.011649 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:19:48.011658 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:19:48.011668 kernel: ACPI: Interpreter enabled
Sep 11 00:19:48.011677 kernel: ACPI: PM: (supports S0 S5)
Sep 11 00:19:48.011687 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 11 00:19:48.011697 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 11 00:19:48.011706 kernel: PCI: Using E820 reservations for host bridge windows
Sep 11 00:19:48.011716 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 11 00:19:48.011729 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:19:48.014076 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:19:48.014414 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 11 00:19:48.014596 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 11 00:19:48.014621 kernel: acpiphp: Slot [3] registered
Sep 11 00:19:48.014636 kernel: acpiphp: Slot [4] registered
Sep 11 00:19:48.014653 kernel: acpiphp: Slot [5] registered
Sep 11 00:19:48.014681 kernel: acpiphp: Slot [6] registered
Sep 11 00:19:48.014697 kernel: acpiphp: Slot [7] registered
Sep 11 00:19:48.014713 kernel: acpiphp: Slot [8] registered
Sep 11 00:19:48.014729 kernel: acpiphp: Slot [9] registered
Sep 11 00:19:48.014744 kernel: acpiphp: Slot [10] registered
Sep 11 00:19:48.014759 kernel: acpiphp: Slot [11] registered
Sep 11 00:19:48.014773 kernel: acpiphp: Slot [12] registered
Sep 11 00:19:48.014788 kernel: acpiphp: Slot [13] registered
Sep 11 00:19:48.014803 kernel: acpiphp: Slot [14] registered
Sep 11 00:19:48.014823 kernel: acpiphp: Slot [15] registered
Sep 11 00:19:48.014838 kernel: acpiphp: Slot [16] registered
Sep 11 00:19:48.014853 kernel: acpiphp: Slot [17] registered
Sep 11 00:19:48.014867 kernel: acpiphp: Slot [18] registered
Sep 11 00:19:48.014882 kernel: acpiphp: Slot [19] registered
Sep 11 00:19:48.014897 kernel: acpiphp: Slot [20] registered
Sep 11 00:19:48.014911 kernel: acpiphp: Slot [21] registered
Sep 11 00:19:48.014926 kernel: acpiphp: Slot [22] registered
Sep 11 00:19:48.014940 kernel: acpiphp: Slot [23] registered
Sep 11 00:19:48.014955 kernel: acpiphp: Slot [24] registered
Sep 11 00:19:48.014976 kernel: acpiphp: Slot [25] registered
Sep 11 00:19:48.014985 kernel: acpiphp: Slot [26] registered
Sep 11 00:19:48.014995 kernel: acpiphp: Slot [27] registered
Sep 11 00:19:48.015005 kernel: acpiphp: Slot [28] registered
Sep 11 00:19:48.015014 kernel: acpiphp: Slot [29] registered
Sep 11 00:19:48.015023 kernel: acpiphp: Slot [30] registered
Sep 11 00:19:48.015032 kernel: acpiphp: Slot [31] registered
Sep 11 00:19:48.015042 kernel: PCI host bridge to bus 0000:00
Sep 11 00:19:48.015306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 11 00:19:48.015472 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 11 00:19:48.015603 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 11 00:19:48.015697 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 11 00:19:48.015787 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 11 00:19:48.015876 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:19:48.016053 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:19:48.016285 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 11 00:19:48.016421 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 11 00:19:48.016558 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Sep 11 00:19:48.016665 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 11 00:19:48.016771 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 11 00:19:48.016876 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 11 00:19:48.016982 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 11 00:19:48.017249 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 11 00:19:48.021138 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Sep 11 00:19:48.021328 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 11 00:19:48.021489 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 11 00:19:48.021653 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 11 00:19:48.021860 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 11 00:19:48.022036 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 11 00:19:48.022256 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 11 00:19:48.022379 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Sep 11 00:19:48.022511 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Sep 11 00:19:48.022664 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 11 00:19:48.022838 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:19:48.022954 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Sep 11 00:19:48.023180 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Sep 11 00:19:48.023359 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 11 00:19:48.023560 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 11 00:19:48.023735 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Sep 11 00:19:48.023887 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Sep 11 00:19:48.024044 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 11 00:19:48.025389 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:19:48.025586 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Sep 11 00:19:48.025755 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Sep 11 00:19:48.025865 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 11 00:19:48.026043 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:19:48.026222 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Sep 11 00:19:48.026336 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Sep 11 00:19:48.026503 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 11 00:19:48.026652 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 11 00:19:48.026783 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Sep 11 00:19:48.026931 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Sep 11 00:19:48.027069 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 11 00:19:48.029436 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:19:48.029599 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Sep 11 00:19:48.029714 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 11 00:19:48.029727 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 11 00:19:48.029737 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 11 00:19:48.029747 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 11 00:19:48.029756 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 11 00:19:48.029766 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 11 00:19:48.029775 kernel: iommu: Default domain type: Translated
Sep 11 00:19:48.029784 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 11 00:19:48.029798 kernel: PCI: Using ACPI for IRQ routing
Sep 11 00:19:48.029807 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 11 00:19:48.029822 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 11 00:19:48.029835 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 11 00:19:48.029975 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 11 00:19:48.030168 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 11 00:19:48.030278 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 11 00:19:48.030296 kernel: vgaarb: loaded
Sep 11 00:19:48.030319 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 11 00:19:48.030328 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 11 00:19:48.030337 kernel: clocksource: Switched to clocksource kvm-clock
Sep 11 00:19:48.030347 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:19:48.030357 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:19:48.030366 kernel: pnp: PnP ACPI init
Sep 11 00:19:48.030375 kernel: pnp: PnP ACPI: found 4 devices
Sep 11 00:19:48.030384 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 11 00:19:48.030393 kernel: NET: Registered PF_INET protocol family
Sep 11 00:19:48.030458 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:19:48.030469 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 11 00:19:48.030479 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:19:48.030488 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
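The PCI enumeration above prints each device's BARs as inclusive `[io ...]` or `[mem ...]` ranges; PCI BARs are always power-of-two sized. A sketch of decoding one such line from the log (device 1af4:1000 is a virtio network device):

```python
# Decode a "BAR n [io 0xXXXX-0xYYYY]" range from the log and compute
# its size; PCI BAR address ranges are inclusive.
import re

def bar_size(line: str) -> int:
    m = re.search(r"\[(?:io|mem) (0x[0-9a-f]+)-(0x[0-9a-f]+)", line)
    start, end = (int(x, 16) for x in m.groups())
    return end - start + 1

line = "pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]"
print(bar_size(line))  # 32 -> a 32-port I/O window, power of two as required
```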
Sep 11 00:19:48.030497 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 11 00:19:48.030506 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 11 00:19:48.030515 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 11 00:19:48.030525 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 11 00:19:48.030534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:19:48.030546 kernel: NET: Registered PF_XDP protocol family
Sep 11 00:19:48.030657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 11 00:19:48.030746 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 11 00:19:48.030832 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 11 00:19:48.030934 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 11 00:19:48.031066 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 11 00:19:48.034418 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 11 00:19:48.034620 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 11 00:19:48.034653 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 11 00:19:48.034808 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29318 usecs
Sep 11 00:19:48.034831 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:19:48.034849 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 11 00:19:48.034866 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Sep 11 00:19:48.034882 kernel: Initialise system trusted keyrings
Sep 11 00:19:48.034899 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 11 00:19:48.034915 kernel: Key type asymmetric registered
Sep 11 00:19:48.034930 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:19:48.035034 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 11 00:19:48.035051 kernel: io scheduler mq-deadline registered
Sep 11 00:19:48.035068 kernel: io scheduler kyber registered
Sep 11 00:19:48.035084 kernel: io scheduler bfq registered
Sep 11 00:19:48.035145 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 11 00:19:48.035163 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 11 00:19:48.035180 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 11 00:19:48.035196 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 11 00:19:48.035212 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:19:48.035234 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 11 00:19:48.035249 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 11 00:19:48.035265 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 11 00:19:48.035281 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 11 00:19:48.035619 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 11 00:19:48.035645 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 11 00:19:48.035787 kernel: rtc_cmos 00:03: registered as rtc0
Sep 11 00:19:48.035922 kernel: rtc_cmos 00:03: setting system clock to 2025-09-11T00:19:47 UTC (1757549987)
Sep 11 00:19:48.036072 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 11 00:19:48.037903 kernel: intel_pstate: CPU model not supported
Sep 11 00:19:48.037933 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:19:48.037943 kernel: Segment Routing with IPv6
Sep 11 00:19:48.037952 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:19:48.037961 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:19:48.037970 kernel: Key type dns_resolver registered
Sep 11 00:19:48.037981 kernel: IPI shorthand broadcast: enabled
Sep 11 00:19:48.037990 kernel: sched_clock: Marking stable (4171007887, 161048990)->(4366403485, -34346608)
Sep 11 00:19:48.038005 kernel: registered taskstats version 1
Sep 11 00:19:48.038014 kernel: Loading compiled-in X.509 certificates
Sep 11 00:19:48.038023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 8138ce5002a1b572fd22b23ac238f29bab3f249f'
Sep 11 00:19:48.038032 kernel: Demotion targets for Node 0: null
Sep 11 00:19:48.038040 kernel: Key type .fscrypt registered
Sep 11 00:19:48.038049 kernel: Key type fscrypt-provisioning registered
Sep 11 00:19:48.038078 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:19:48.038113 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:19:48.038127 kernel: ima: No architecture policies found
Sep 11 00:19:48.038136 kernel: clk: Disabling unused clocks
Sep 11 00:19:48.038145 kernel: Warning: unable to open an initial console.
Sep 11 00:19:48.038154 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 11 00:19:48.038164 kernel: Write protecting the kernel read-only data: 24576k
Sep 11 00:19:48.038173 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 11 00:19:48.038182 kernel: Run /init as init process
Sep 11 00:19:48.038191 kernel: with arguments:
Sep 11 00:19:48.038200 kernel: /init
Sep 11 00:19:48.038213 kernel: with environment:
Sep 11 00:19:48.038222 kernel: HOME=/
Sep 11 00:19:48.038230 kernel: TERM=linux
Sep 11 00:19:48.038239 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:19:48.038251 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:19:48.038266 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:19:48.038277 systemd[1]: Detected virtualization kvm.
Sep 11 00:19:48.038287 systemd[1]: Detected architecture x86-64.
Sep 11 00:19:48.038299 systemd[1]: Running in initrd.
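The rtc_cmos line above reports the hardware clock as both an ISO timestamp and a Unix epoch in the same message; the two can be cross-checked with the standard library:

```python
# Cross-check the rtc_cmos log line:
# "setting system clock to 2025-09-11T00:19:47 UTC (1757549987)"
from datetime import datetime, timezone

epoch = 1757549987  # epoch value printed in the log
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(stamp.isoformat())  # 2025-09-11T00:19:47+00:00 -- matches the log
```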
Sep 11 00:19:48.038309 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:19:48.038319 systemd[1]: Hostname set to .
Sep 11 00:19:48.038329 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:19:48.038339 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:19:48.038349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:19:48.038359 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:19:48.038370 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:19:48.038383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:19:48.038398 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:19:48.038420 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:19:48.038433 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:19:48.038445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:19:48.038455 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:19:48.038465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:19:48.038474 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:19:48.038484 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:19:48.038496 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:19:48.038512 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:19:48.038522 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:19:48.038536 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:19:48.038546 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:19:48.038556 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:19:48.038565 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:19:48.038575 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:19:48.038584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:19:48.038594 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:19:48.038603 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:19:48.038612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:19:48.038625 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:19:48.038635 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:19:48.038644 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:19:48.038654 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:19:48.038664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:19:48.038673 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:19:48.038683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:19:48.038696 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:19:48.038758 systemd-journald[212]: Collecting audit messages is disabled.
Sep 11 00:19:48.038788 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:19:48.038798 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:19:48.038809 systemd-journald[212]: Journal started
Sep 11 00:19:48.038833 systemd-journald[212]: Runtime Journal (/run/log/journal/2f4cb8cdd7c94cbd917b5a02a4aeb39e) is 4.9M, max 39.6M, 34.6M free.
Sep 11 00:19:48.011168 systemd-modules-load[213]: Inserted module 'overlay'
Sep 11 00:19:48.042168 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:19:48.053382 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:19:48.109731 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:19:48.109790 kernel: Bridge firewalling registered
Sep 11 00:19:48.072601 systemd-modules-load[213]: Inserted module 'br_netfilter'
Sep 11 00:19:48.111831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:19:48.118446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:19:48.119892 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:19:48.126589 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:19:48.129345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:19:48.133936 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:19:48.134307 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:19:48.144616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:19:48.164910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:19:48.169008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:19:48.173278 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:19:48.178179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:19:48.182313 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:19:48.210131 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=24178014e7d1a618b6c727661dc98ca9324f7f5aeefcaa5f4996d4d839e6e63a
Sep 11 00:19:48.229178 systemd-resolved[247]: Positive Trust Anchors:
Sep 11 00:19:48.230224 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:19:48.230281 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:19:48.238665 systemd-resolved[247]: Defaulting to hostname 'linux'.
Sep 11 00:19:48.242709 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:19:48.244601 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:19:48.351154 kernel: SCSI subsystem initialized
Sep 11 00:19:48.367245 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:19:48.386156 kernel: iscsi: registered transport (tcp)
Sep 11 00:19:48.419736 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:19:48.419834 kernel: QLogic iSCSI HBA Driver
Sep 11 00:19:48.458631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:19:48.489506 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:19:48.493031 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:19:48.567371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:19:48.570471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:19:48.646226 kernel: raid6: avx2x4 gen() 23992 MB/s
Sep 11 00:19:48.662181 kernel: raid6: avx2x2 gen() 20882 MB/s
Sep 11 00:19:48.680570 kernel: raid6: avx2x1 gen() 13825 MB/s
Sep 11 00:19:48.680690 kernel: raid6: using algorithm avx2x4 gen() 23992 MB/s
Sep 11 00:19:48.699680 kernel: raid6: .... xor() 6289 MB/s, rmw enabled
Sep 11 00:19:48.699829 kernel: raid6: using avx2x2 recovery algorithm
Sep 11 00:19:48.737169 kernel: xor: automatically using best checksumming function avx
Sep 11 00:19:48.925297 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:19:48.936365 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:19:48.941338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:19:48.975636 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Sep 11 00:19:48.982597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:19:48.988564 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:19:49.021566 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 11 00:19:49.060498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:19:49.063548 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:19:49.151701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:19:49.157445 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:19:49.238141 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 11 00:19:49.244145 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Sep 11 00:19:49.254230 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 11 00:19:49.257270 kernel: scsi host0: Virtio SCSI HBA
Sep 11 00:19:49.268544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:19:49.268639 kernel: GPT:9289727 != 125829119
Sep 11 00:19:49.268668 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:19:49.271667 kernel: GPT:9289727 != 125829119
Sep 11 00:19:49.273437 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:19:49.273510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:19:49.301122 kernel: cryptd: max_cpu_qlen set to 1000
Sep 11 00:19:49.340771 kernel: ACPI: bus type USB registered
Sep 11 00:19:49.340870 kernel: usbcore: registered new interface driver usbfs
Sep 11 00:19:49.340883 kernel: usbcore: registered new interface driver hub
Sep 11 00:19:49.340894 kernel: usbcore: registered new device driver usb
Sep 11 00:19:49.355151 kernel: AES CTR mode by8 optimization enabled
Sep 11 00:19:49.359881 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 11 00:19:49.360256 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 11 00:19:49.360767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:19:49.362384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:19:49.365362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:19:49.369070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:19:49.371745 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:19:49.380163 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 11 00:19:49.383953 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 11 00:19:49.386176 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 11 00:19:49.386433 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 11 00:19:49.399560 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 11 00:19:49.399663 kernel: hub 1-0:1.0: USB hub found
Sep 11 00:19:49.399948 kernel: hub 1-0:1.0: 2 ports detected
Sep 11 00:19:49.445260 kernel: libata version 3.00 loaded.
Sep 11 00:19:49.453159 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 11 00:19:49.461128 kernel: scsi host1: ata_piix
Sep 11 00:19:49.468140 kernel: scsi host2: ata_piix
Sep 11 00:19:49.475569 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Sep 11 00:19:49.475699 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Sep 11 00:19:49.516943 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:19:49.551000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:19:49.567664 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:19:49.576781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:19:49.577518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:19:49.589045 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:19:49.591598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 00:19:49.616152 disk-uuid[610]: Primary Header is updated.
Sep 11 00:19:49.616152 disk-uuid[610]: Secondary Entries is updated.
Sep 11 00:19:49.616152 disk-uuid[610]: Secondary Header is updated.
Sep 11 00:19:49.624500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:19:49.629157 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:19:49.800223 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:19:49.853119 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:19:49.854080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:19:49.855806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:19:49.859032 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 00:19:49.907807 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:19:50.641898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:19:50.645072 disk-uuid[611]: The operation has completed successfully.
Sep 11 00:19:50.709863 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 00:19:50.711552 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 00:19:50.780675 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 00:19:50.804087 sh[635]: Success
Sep 11 00:19:50.834273 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 00:19:50.834391 kernel: device-mapper: uevent: version 1.0.3
Sep 11 00:19:50.835826 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 00:19:50.851239 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Sep 11 00:19:50.930515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 00:19:50.932786 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 00:19:50.943811 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 00:19:50.967413 kernel: BTRFS: device fsid f1eb5eb7-34cc-49c0-9f2b-e603bd772d66 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (647)
Sep 11 00:19:50.970551 kernel: BTRFS info (device dm-0): first mount of filesystem f1eb5eb7-34cc-49c0-9f2b-e603bd772d66
Sep 11 00:19:50.970698 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:19:50.982833 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 00:19:50.982942 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 00:19:50.985700 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 00:19:50.987629 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:19:50.988507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 00:19:50.989470 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 00:19:50.995376 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 00:19:51.033156 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (678)
Sep 11 00:19:51.037426 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:19:51.037510 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:19:51.046162 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:19:51.046274 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:19:51.059160 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:19:51.060725 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 00:19:51.063372 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 00:19:51.199340 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:19:51.203346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:19:51.289767 systemd-networkd[816]: lo: Link UP
Sep 11 00:19:51.289782 systemd-networkd[816]: lo: Gained carrier
Sep 11 00:19:51.296809 systemd-networkd[816]: Enumeration completed
Sep 11 00:19:51.297317 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:19:51.297439 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 11 00:19:51.297444 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 11 00:19:51.298375 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:19:51.298379 systemd-networkd[816]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:19:51.299414 systemd-networkd[816]: eth0: Link UP
Sep 11 00:19:51.299812 systemd-networkd[816]: eth1: Link UP
Sep 11 00:19:51.300256 systemd-networkd[816]: eth0: Gained carrier
Sep 11 00:19:51.300278 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 11 00:19:51.306397 systemd-networkd[816]: eth1: Gained carrier
Sep 11 00:19:51.306430 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:19:51.307936 systemd[1]: Reached target network.target - Network.
Sep 11 00:19:51.319219 systemd-networkd[816]: eth0: DHCPv4 address 137.184.47.128/20, gateway 137.184.32.1 acquired from 169.254.169.253
Sep 11 00:19:51.335388 systemd-networkd[816]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253
Sep 11 00:19:51.367705 ignition[717]: Ignition 2.21.0
Sep 11 00:19:51.367727 ignition[717]: Stage: fetch-offline
Sep 11 00:19:51.367807 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:51.367824 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:51.371354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:19:51.367994 ignition[717]: parsed url from cmdline: ""
Sep 11 00:19:51.368000 ignition[717]: no config URL provided
Sep 11 00:19:51.368010 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:19:51.368023 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:19:51.375781 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 11 00:19:51.368033 ignition[717]: failed to fetch config: resource requires networking
Sep 11 00:19:51.368363 ignition[717]: Ignition finished successfully
Sep 11 00:19:51.422203 ignition[826]: Ignition 2.21.0
Sep 11 00:19:51.422229 ignition[826]: Stage: fetch
Sep 11 00:19:51.422498 ignition[826]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:51.422514 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:51.422658 ignition[826]: parsed url from cmdline: ""
Sep 11 00:19:51.422665 ignition[826]: no config URL provided
Sep 11 00:19:51.422674 ignition[826]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 00:19:51.422686 ignition[826]: no config at "/usr/lib/ignition/user.ign"
Sep 11 00:19:51.422747 ignition[826]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 11 00:19:51.441268 ignition[826]: GET result: OK
Sep 11 00:19:51.441486 ignition[826]: parsing config with SHA512: 82e377b9685f33bebd4248d7ef47f54b05eadbdbc268bb300f8e3d76dc17f29c97829d2a8da94ce5d5ed41858fa7945956339234860e4b9df11937ee00495a39
Sep 11 00:19:51.448750 unknown[826]: fetched base config from "system"
Sep 11 00:19:51.448769 unknown[826]: fetched base config from "system"
Sep 11 00:19:51.449524 ignition[826]: fetch: fetch complete
Sep 11 00:19:51.448780 unknown[826]: fetched user config from "digitalocean"
Sep 11 00:19:51.449534 ignition[826]: fetch: fetch passed
Sep 11 00:19:51.449628 ignition[826]: Ignition finished successfully
Sep 11 00:19:51.454362 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 11 00:19:51.462535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 00:19:51.513611 ignition[833]: Ignition 2.21.0
Sep 11 00:19:51.514894 ignition[833]: Stage: kargs
Sep 11 00:19:51.515279 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:51.515298 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:51.524803 ignition[833]: kargs: kargs passed
Sep 11 00:19:51.525059 ignition[833]: Ignition finished successfully
Sep 11 00:19:51.528706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 00:19:51.533460 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 00:19:51.586023 ignition[839]: Ignition 2.21.0
Sep 11 00:19:51.586054 ignition[839]: Stage: disks
Sep 11 00:19:51.586389 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:51.586408 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:51.590913 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 00:19:51.587518 ignition[839]: disks: disks passed
Sep 11 00:19:51.587593 ignition[839]: Ignition finished successfully
Sep 11 00:19:51.593538 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 00:19:51.595873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 00:19:51.597419 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:19:51.598924 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:19:51.600917 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:19:51.605354 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 00:19:51.656522 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 00:19:51.661510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 00:19:51.665930 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 00:19:51.857123 kernel: EXT4-fs (vda9): mounted filesystem 6a9ce0af-81d0-4628-9791-e47488ed2744 r/w with ordered data mode. Quota mode: none.
Sep 11 00:19:51.858028 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 00:19:51.859434 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:19:51.862227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:19:51.864253 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 00:19:51.869402 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Sep 11 00:19:51.873324 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 11 00:19:51.874310 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 00:19:51.874440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:19:51.887302 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 00:19:51.889126 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 00:19:51.923169 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (855)
Sep 11 00:19:51.930750 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:19:51.930855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:19:51.938142 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:19:51.938255 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:19:51.943891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:19:52.002775 coreos-metadata[857]: Sep 11 00:19:52.002 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 11 00:19:52.008628 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 00:19:52.017596 coreos-metadata[857]: Sep 11 00:19:52.016 INFO Fetch successful
Sep 11 00:19:52.029706 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory
Sep 11 00:19:52.031780 coreos-metadata[858]: Sep 11 00:19:52.031 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 11 00:19:52.032186 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Sep 11 00:19:52.033793 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Sep 11 00:19:52.043203 initrd-setup-root[901]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 00:19:52.045754 coreos-metadata[858]: Sep 11 00:19:52.045 INFO Fetch successful
Sep 11 00:19:52.052686 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 00:19:52.055299 coreos-metadata[858]: Sep 11 00:19:52.055 INFO wrote hostname ci-4372.1.0-n-d6d7f926f9 to /sysroot/etc/hostname
Sep 11 00:19:52.057577 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 11 00:19:52.196867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 00:19:52.200282 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 00:19:52.203348 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 00:19:52.226558 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 00:19:52.228117 kernel: BTRFS info (device vda6): last unmount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:19:52.256293 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 00:19:52.267307 ignition[977]: INFO : Ignition 2.21.0
Sep 11 00:19:52.267307 ignition[977]: INFO : Stage: mount
Sep 11 00:19:52.270265 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:52.270265 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:52.272885 ignition[977]: INFO : mount: mount passed
Sep 11 00:19:52.274361 ignition[977]: INFO : Ignition finished successfully
Sep 11 00:19:52.277244 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 00:19:52.279712 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 00:19:52.305990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 00:19:52.342178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (989)
Sep 11 00:19:52.345568 kernel: BTRFS info (device vda6): first mount of filesystem a5de7b5e-e14d-4c62-883d-af7ea22fae7e
Sep 11 00:19:52.345681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 11 00:19:52.351508 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 00:19:52.351641 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 00:19:52.354476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 00:19:52.397819 ignition[1006]: INFO : Ignition 2.21.0
Sep 11 00:19:52.397819 ignition[1006]: INFO : Stage: files
Sep 11 00:19:52.399587 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:52.399587 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:52.401745 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 00:19:52.404514 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 00:19:52.404514 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 00:19:52.409142 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 00:19:52.410188 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 00:19:52.411394 unknown[1006]: wrote ssh authorized keys file for user: core
Sep 11 00:19:52.413274 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 00:19:52.414677 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:19:52.416181 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 11 00:19:52.475365 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 00:19:53.064308 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 11 00:19:53.064308 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:19:53.067702 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:19:53.079738 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 11 00:19:53.079930 systemd-networkd[816]: eth1: Gained IPv6LL
Sep 11 00:19:53.207786 systemd-networkd[816]: eth0: Gained IPv6LL
Sep 11 00:19:53.575488 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 11 00:19:55.966835 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 11 00:19:55.966835 ignition[1006]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 11 00:19:55.970190 ignition[1006]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 00:19:55.974272 ignition[1006]: INFO : files: files passed
Sep 11 00:19:55.974272 ignition[1006]: INFO : Ignition finished successfully
Sep 11 00:19:55.976027 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 00:19:55.980258 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 00:19:55.985453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 00:19:56.004787 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 00:19:56.004966 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:19:56.013866 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:19:56.015658 initrd-setup-root-after-ignition[1040]: grep:
Sep 11 00:19:56.016423 initrd-setup-root-after-ignition[1040]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:19:56.017326 initrd-setup-root-after-ignition[1036]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 00:19:56.017668 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:19:56.019942 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 00:19:56.022664 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 00:19:56.087629 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 00:19:56.087810 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 00:19:56.090159 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 00:19:56.091779 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 00:19:56.092658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 00:19:56.095320 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 00:19:56.137714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:19:56.142430 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 00:19:56.180866 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:19:56.183212 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:19:56.184248 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 00:19:56.186006 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 00:19:56.186272 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 00:19:56.187910 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 00:19:56.191662 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 00:19:56.193845 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 00:19:56.195979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 00:19:56.198133 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 00:19:56.199038 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 00:19:56.201298 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 00:19:56.202121 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 00:19:56.204216 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 00:19:56.205918 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 00:19:56.206694 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 00:19:56.207464 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 00:19:56.207675 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 00:19:56.210393 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:19:56.211661 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:19:56.213298 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 00:19:56.213520 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:19:56.219622 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 00:19:56.219860 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:19:56.221065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 00:19:56.221311 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 00:19:56.222262 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 00:19:56.222423 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 00:19:56.223842 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 11 00:19:56.224015 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 11 00:19:56.233400 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 00:19:56.241414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 00:19:56.252654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 00:19:56.252959 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:19:56.256386 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 00:19:56.256601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:19:56.275821 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 00:19:56.275992 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 00:19:56.300621 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 00:19:56.313802 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:19:56.314002 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:19:56.320002 ignition[1060]: INFO : Ignition 2.21.0
Sep 11 00:19:56.320002 ignition[1060]: INFO : Stage: umount
Sep 11 00:19:56.322141 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 00:19:56.322141 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 11 00:19:56.325808 ignition[1060]: INFO : umount: umount passed
Sep 11 00:19:56.325808 ignition[1060]: INFO : Ignition finished successfully
Sep 11 00:19:56.327636 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 00:19:56.327927 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 00:19:56.331049 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 00:19:56.331542 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 00:19:56.333307 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 00:19:56.333826 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 00:19:56.335474 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 11 00:19:56.335550 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 11 00:19:56.336837 systemd[1]: Stopped target network.target - Network.
Sep 11 00:19:56.338317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 00:19:56.338400 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 00:19:56.339808 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 00:19:56.341180 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 00:19:56.341277 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:19:56.342833 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 00:19:56.344346 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 00:19:56.345635 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 00:19:56.345876 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:19:56.350456 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 00:19:56.350528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:19:56.352210 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 00:19:56.352320 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 00:19:56.353689 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 00:19:56.353758 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 00:19:56.355181 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:19:56.355258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:19:56.356632 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 00:19:56.358121 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 00:19:56.364381 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 00:19:56.364571 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 00:19:56.370870 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 00:19:56.371445 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 00:19:56.371624 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 00:19:56.374477 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 00:19:56.375447 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 00:19:56.376696 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 00:19:56.376763 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:19:56.380262 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 00:19:56.380884 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 00:19:56.380980 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 00:19:56.381759 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 00:19:56.381829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:19:56.386391 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 00:19:56.386504 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:19:56.388281 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 00:19:56.388381 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:19:56.390050 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:19:56.394800 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:19:56.394914 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:19:56.408250 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:19:56.408545 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:19:56.409795 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:19:56.409860 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:19:56.411333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:19:56.411404 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:19:56.412863 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:19:56.412945 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:19:56.415081 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:19:56.415261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:19:56.416547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:19:56.416631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:19:56.419404 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:19:56.421028 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:19:56.421150 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:19:56.425031 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:19:56.425137 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:19:56.427624 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 11 00:19:56.427686 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:19:56.428763 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 00:19:56.428812 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:19:56.430254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:19:56.430313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:19:56.433044 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:19:56.433173 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 11 00:19:56.433230 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:19:56.433288 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:19:56.433819 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:19:56.433951 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:19:56.443648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:19:56.443773 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:19:56.446228 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:19:56.454330 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:19:56.488897 systemd[1]: Switching root.
Sep 11 00:19:56.528473 systemd-journald[212]: Journal stopped
Sep 11 00:19:58.220134 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:19:58.220216 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:19:58.220233 kernel: SELinux: policy capability open_perms=1
Sep 11 00:19:58.220244 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:19:58.220258 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:19:58.220269 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:19:58.220281 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:19:58.220309 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:19:58.220322 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:19:58.220333 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:19:58.220344 kernel: audit: type=1403 audit(1757549996.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:19:58.220357 systemd[1]: Successfully loaded SELinux policy in 57.237ms.
Sep 11 00:19:58.220382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.364ms.
Sep 11 00:19:58.220397 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:19:58.220409 systemd[1]: Detected virtualization kvm.
Sep 11 00:19:58.220427 systemd[1]: Detected architecture x86-64.
Sep 11 00:19:58.220439 systemd[1]: Detected first boot.
Sep 11 00:19:58.220452 systemd[1]: Hostname set to .
Sep 11 00:19:58.220464 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:19:58.220475 zram_generator::config[1104]: No configuration found.
Sep 11 00:19:58.220493 kernel: Guest personality initialized and is inactive
Sep 11 00:19:58.220511 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 11 00:19:58.220528 kernel: Initialized host personality
Sep 11 00:19:58.220546 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:19:58.220572 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:19:58.220592 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:19:58.220605 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:19:58.220618 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:19:58.220633 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:19:58.220645 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:19:58.220657 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:19:58.220669 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:19:58.220691 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:19:58.220704 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:19:58.220716 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:19:58.220728 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:19:58.220739 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:19:58.220751 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:19:58.220763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:19:58.220774 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:19:58.220793 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:19:58.220805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:19:58.220817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:19:58.220829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 11 00:19:58.220841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:19:58.220853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:19:58.220867 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:19:58.220885 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:19:58.220897 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:19:58.220909 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:19:58.220920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:19:58.220936 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:19:58.220947 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:19:58.220959 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:19:58.220970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:19:58.220982 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:19:58.220999 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:19:58.221011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:19:58.221023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:19:58.221034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:19:58.221046 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:19:58.221058 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:19:58.221070 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:19:58.221081 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:19:58.221108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:19:58.221126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:19:58.221138 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:19:58.221162 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:19:58.221176 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:19:58.221200 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:19:58.221220 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:19:58.221237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:19:58.221257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:19:58.221275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:19:58.221304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:19:58.221321 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:19:58.221340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:19:58.221359 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:19:58.221380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:19:58.221395 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:19:58.221407 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:19:58.221419 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:19:58.221439 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:19:58.221452 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:19:58.221470 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:19:58.221483 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:19:58.221495 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:19:58.221513 kernel: loop: module loaded
Sep 11 00:19:58.221536 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:19:58.221549 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:19:58.221564 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:19:58.221585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:19:58.221613 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:19:58.221632 systemd[1]: Stopped verity-setup.service.
Sep 11 00:19:58.221650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 11 00:19:58.221665 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:19:58.221682 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:19:58.221700 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:19:58.221719 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:19:58.221738 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:19:58.221751 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:19:58.221772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:19:58.221783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:19:58.221795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:19:58.221808 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:19:58.221819 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:19:58.221831 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:19:58.221843 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:19:58.221854 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:19:58.221874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:19:58.221893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:19:58.221905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:19:58.221917 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:19:58.221929 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:19:58.221942 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:19:58.221953 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:19:58.221964 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:19:58.221975 kernel: ACPI: bus type drm_connector registered
Sep 11 00:19:58.221993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:19:58.222011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:19:58.222023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:19:58.222035 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:19:58.222047 kernel: fuse: init (API version 7.41)
Sep 11 00:19:58.222059 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:19:58.222077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:19:58.222125 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:19:58.222138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:19:58.222214 systemd-journald[1178]: Collecting audit messages is disabled.
Sep 11 00:19:58.222241 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:19:58.222256 systemd-journald[1178]: Journal started
Sep 11 00:19:58.222288 systemd-journald[1178]: Runtime Journal (/run/log/journal/2f4cb8cdd7c94cbd917b5a02a4aeb39e) is 4.9M, max 39.6M, 34.6M free.
Sep 11 00:19:57.681445 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:19:57.705947 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:19:57.706750 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:19:58.227687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:19:58.230131 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:19:58.244507 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:19:58.250365 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:19:58.251845 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:19:58.253571 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:19:58.255660 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:19:58.322411 kernel: loop0: detected capacity change from 0 to 8
Sep 11 00:19:58.313736 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:19:58.318322 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:19:58.319436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:19:58.354129 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:19:58.361228 systemd-journald[1178]: Time spent on flushing to /var/log/journal/2f4cb8cdd7c94cbd917b5a02a4aeb39e is 123.044ms for 1016 entries.
Sep 11 00:19:58.361228 systemd-journald[1178]: System Journal (/var/log/journal/2f4cb8cdd7c94cbd917b5a02a4aeb39e) is 8M, max 195.6M, 187.6M free.
Sep 11 00:19:58.501493 systemd-journald[1178]: Received client request to flush runtime journal.
Sep 11 00:19:58.501547 kernel: loop1: detected capacity change from 0 to 146240
Sep 11 00:19:58.501573 kernel: loop2: detected capacity change from 0 to 113872
Sep 11 00:19:58.365488 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:19:58.367622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:19:58.374087 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:19:58.418649 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:19:58.424186 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:19:58.460922 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 11 00:19:58.460936 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Sep 11 00:19:58.477428 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:19:58.482485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:19:58.487522 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:19:58.507040 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:19:58.527153 kernel: loop3: detected capacity change from 0 to 229808
Sep 11 00:19:58.574130 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:19:58.596443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:19:58.617169 kernel: loop4: detected capacity change from 0 to 8
Sep 11 00:19:58.628726 kernel: loop5: detected capacity change from 0 to 146240
Sep 11 00:19:58.679119 kernel: loop6: detected capacity change from 0 to 113872
Sep 11 00:19:58.705126 kernel: loop7: detected capacity change from 0 to 229808
Sep 11 00:19:58.709843 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:19:58.715847 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:19:58.724976 (sd-merge)[1252]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 11 00:19:58.725783 (sd-merge)[1252]: Merged extensions into '/usr'.
Sep 11 00:19:58.758072 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 11 00:19:58.766356 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Sep 11 00:19:58.780616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:19:58.783071 systemd[1]: Reload requested from client PID 1202 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:19:58.783112 systemd[1]: Reloading...
Sep 11 00:19:59.087736 zram_generator::config[1282]: No configuration found.
Sep 11 00:19:59.202837 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:19:59.331930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:19:59.448042 systemd[1]: Reloading finished in 664 ms.
Sep 11 00:19:59.464406 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:19:59.466507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:19:59.468610 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 11 00:19:59.488384 systemd[1]: Starting ensure-sysext.service... Sep 11 00:19:59.493414 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 11 00:19:59.552582 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Sep 11 00:19:59.552612 systemd[1]: Reloading... Sep 11 00:19:59.601981 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 11 00:19:59.602402 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 11 00:19:59.603076 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 11 00:19:59.603522 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 11 00:19:59.604342 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 11 00:19:59.604751 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Sep 11 00:19:59.604936 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Sep 11 00:19:59.610250 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:19:59.610267 systemd-tmpfiles[1328]: Skipping /boot Sep 11 00:19:59.650056 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Sep 11 00:19:59.650080 systemd-tmpfiles[1328]: Skipping /boot Sep 11 00:19:59.739161 zram_generator::config[1355]: No configuration found. Sep 11 00:19:59.938250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 11 00:20:00.103254 systemd[1]: Reloading finished in 550 ms. Sep 11 00:20:00.118715 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 11 00:20:00.133410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:20:00.145363 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:20:00.148479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 11 00:20:00.153768 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 11 00:20:00.159408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 11 00:20:00.166841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 11 00:20:00.172359 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 11 00:20:00.187276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.187567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:20:00.190616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:20:00.195230 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:20:00.200668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:20:00.201641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:20:00.201841 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 11 00:20:00.201990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.209434 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.209748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:20:00.210004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:20:00.210193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:20:00.217602 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 11 00:20:00.219182 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.229480 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.229805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:20:00.232996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 11 00:20:00.235684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 11 00:20:00.235871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 11 00:20:00.236087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.242210 systemd[1]: Finished ensure-sysext.service. Sep 11 00:20:00.245354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:20:00.248417 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:20:00.249956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:20:00.250436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:20:00.259150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:20:00.273472 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 11 00:20:00.286562 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:20:00.288253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:20:00.291653 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 11 00:20:00.293753 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 11 00:20:00.294568 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 11 00:20:00.303986 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 11 00:20:00.314259 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 11 00:20:00.319815 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 11 00:20:00.379621 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 11 00:20:00.381017 systemd-udevd[1404]: Using default interface naming scheme 'v255'. 
Sep 11 00:20:00.383639 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:20:00.399945 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 11 00:20:00.429222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 11 00:20:00.437047 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:20:00.438057 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 11 00:20:00.463229 augenrules[1458]: No rules Sep 11 00:20:00.464668 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:20:00.465049 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:20:00.691627 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Sep 11 00:20:00.691844 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 11 00:20:00.697244 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 11 00:20:00.700052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.700310 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 11 00:20:00.703522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 11 00:20:00.706681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 11 00:20:00.719363 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 11 00:20:00.720389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 11 00:20:00.720453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 11 00:20:00.720507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 11 00:20:00.720532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 11 00:20:00.770195 kernel: ISO 9660 Extensions: RRIP_1991A Sep 11 00:20:00.803057 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 11 00:20:00.811304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 11 00:20:00.811669 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 11 00:20:00.813355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 11 00:20:00.813980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 11 00:20:00.816882 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 11 00:20:00.818875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 11 00:20:00.828220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 11 00:20:00.828334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 11 00:20:01.165333 systemd-networkd[1443]: lo: Link UP Sep 11 00:20:01.168181 systemd-networkd[1443]: lo: Gained carrier Sep 11 00:20:01.208358 systemd-networkd[1443]: Enumeration completed Sep 11 00:20:01.209179 systemd-networkd[1443]: eth0: Configuring with /run/systemd/network/10-22:d7:a5:71:e3:8c.network. Sep 11 00:20:01.210175 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:20:01.220161 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 11 00:20:01.228778 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 11 00:20:01.230490 systemd-networkd[1443]: eth1: Configuring with /run/systemd/network/10-a6:ae:ac:e3:94:62.network. Sep 11 00:20:01.233835 systemd-networkd[1443]: eth0: Link UP Sep 11 00:20:01.239530 systemd-networkd[1443]: eth0: Gained carrier Sep 11 00:20:01.241457 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 11 00:20:01.243872 systemd[1]: Reached target time-set.target - System Time Set. Sep 11 00:20:01.263605 systemd-networkd[1443]: eth1: Link UP Sep 11 00:20:01.271427 systemd-networkd[1443]: eth1: Gained carrier Sep 11 00:20:01.279947 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Sep 11 00:20:01.297347 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 11 00:20:01.308815 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 11 00:20:01.344180 systemd-resolved[1403]: Positive Trust Anchors: Sep 11 00:20:01.344209 systemd-resolved[1403]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 11 00:20:01.344257 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 11 00:20:01.366242 systemd-resolved[1403]: Using system hostname 'ci-4372.1.0-n-d6d7f926f9'. Sep 11 00:20:01.370276 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 11 00:20:01.372470 systemd[1]: Reached target network.target - Network. Sep 11 00:20:01.373819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:20:01.375403 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:20:01.377141 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 11 00:20:01.378524 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 11 00:20:01.380248 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 11 00:20:01.381589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 11 00:20:01.383612 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 00:20:01.385181 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 00:20:01.385938 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 11 00:20:01.385984 systemd[1]: Reached target paths.target - Path Units. Sep 11 00:20:01.386934 systemd[1]: Reached target timers.target - Timer Units. Sep 11 00:20:01.390258 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 00:20:01.395991 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 00:20:01.417305 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 00:20:01.425400 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 00:20:01.426331 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 00:20:01.448739 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 00:20:01.465620 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 00:20:01.488323 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 00:20:01.490425 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 11 00:20:01.492902 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 00:20:01.500879 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 00:20:01.506636 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:20:01.507909 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:20:01.507964 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 11 00:20:01.512408 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 00:20:01.517475 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 11 00:20:01.522313 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Sep 11 00:20:01.529077 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 00:20:01.533599 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 00:20:01.540132 kernel: mousedev: PS/2 mouse device common for all mice Sep 11 00:20:01.540717 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 00:20:01.541584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 00:20:01.550572 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 11 00:20:02.576710 systemd-timesyncd[1420]: Contacted time server 23.155.72.147:123 (0.flatcar.pool.ntp.org). Sep 11 00:20:02.576927 systemd-timesyncd[1420]: Initial clock synchronization to Thu 2025-09-11 00:20:02.576222 UTC. Sep 11 00:20:02.584617 systemd-resolved[1403]: Clock change detected. Flushing caches. Sep 11 00:20:02.610886 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 11 00:20:02.609492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 00:20:02.627084 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 00:20:02.654767 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 00:20:02.662001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 00:20:02.671971 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 00:20:02.675464 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 00:20:02.676375 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 11 00:20:02.681557 jq[1514]: false Sep 11 00:20:02.681925 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 00:20:02.692117 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 11 00:20:02.696746 oslogin_cache_refresh[1516]: Refreshing passwd entry cache Sep 11 00:20:02.710010 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing passwd entry cache Sep 11 00:20:02.714166 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 00:20:02.716114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 00:20:02.716443 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 00:20:02.720864 kernel: ACPI: button: Power Button [PWRF] Sep 11 00:20:02.730166 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting users, quitting Sep 11 00:20:02.730806 oslogin_cache_refresh[1516]: Failure getting users, quitting Sep 11 00:20:02.732256 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:20:02.732256 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing group entry cache Sep 11 00:20:02.730847 oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 11 00:20:02.730929 oslogin_cache_refresh[1516]: Refreshing group entry cache Sep 11 00:20:02.734595 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting groups, quitting Sep 11 00:20:02.734595 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 11 00:20:02.733695 oslogin_cache_refresh[1516]: Failure getting groups, quitting Sep 11 00:20:02.733717 oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 11 00:20:02.743927 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 11 00:20:02.745929 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 11 00:20:02.778508 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 11 00:20:02.795953 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 11 00:20:02.801881 dbus-daemon[1512]: [system] SELinux support is enabled Sep 11 00:20:02.802845 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 00:20:02.816116 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 11 00:20:02.816165 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 00:20:02.818909 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 00:20:02.819036 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 11 00:20:02.819067 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 00:20:02.821437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 00:20:02.823001 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 11 00:20:02.837947 extend-filesystems[1515]: Found /dev/vda6 Sep 11 00:20:02.845892 extend-filesystems[1515]: Found /dev/vda9 Sep 11 00:20:02.854601 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 00:20:02.879640 tar[1535]: linux-amd64/LICENSE Sep 11 00:20:02.883710 tar[1535]: linux-amd64/helm Sep 11 00:20:02.887612 jq[1525]: true Sep 11 00:20:02.897608 extend-filesystems[1515]: Checking size of /dev/vda9 Sep 11 00:20:02.943201 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 00:20:02.944863 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 00:20:02.996413 update_engine[1524]: I20250911 00:20:02.996131 1524 main.cc:92] Flatcar Update Engine starting Sep 11 00:20:02.997368 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 00:20:03.031925 extend-filesystems[1515]: Resized partition /dev/vda9 Sep 11 00:20:03.043040 systemd[1]: Started update-engine.service - Update Engine. Sep 11 00:20:03.044189 update_engine[1524]: I20250911 00:20:03.044097 1524 update_check_scheduler.cc:74] Next update check in 2m51s Sep 11 00:20:03.053606 extend-filesystems[1568]: resize2fs 1.47.2 (1-Jan-2025) Sep 11 00:20:03.076774 coreos-metadata[1511]: Sep 11 00:20:03.048 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 11 00:20:03.048750 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 00:20:03.077256 jq[1555]: true Sep 11 00:20:03.105625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 11 00:20:03.383671 systemd-networkd[1443]: eth0: Gained IPv6LL Sep 11 00:20:03.469347 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 00:20:03.475653 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 00:20:03.495695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 11 00:20:03.510032 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 00:20:03.524035 bash[1585]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:20:03.530780 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 11 00:20:03.546605 systemd[1]: Starting sshkeys.service... Sep 11 00:20:03.610175 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 11 00:20:03.651978 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 00:20:03.651978 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 11 00:20:03.651978 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 11 00:20:03.691816 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Sep 11 00:20:03.703801 coreos-metadata[1511]: Sep 11 00:20:03.677 INFO Fetch successful Sep 11 00:20:03.653930 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 00:20:03.655688 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 00:20:03.690961 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 11 00:20:03.717475 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 11 00:20:03.811747 systemd-logind[1523]: New seat seat0. Sep 11 00:20:03.819982 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 00:20:03.903989 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 11 00:20:03.998896 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 11 00:20:04.000707 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 11 00:20:04.017222 coreos-metadata[1604]: Sep 11 00:20:04.014 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 11 00:20:04.024673 systemd-networkd[1443]: eth1: Gained IPv6LL Sep 11 00:20:04.102854 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 00:20:04.114899 coreos-metadata[1604]: Sep 11 00:20:04.114 INFO Fetch successful Sep 11 00:20:04.154864 unknown[1604]: wrote ssh authorized keys file for user: core Sep 11 00:20:04.220574 update-ssh-keys[1628]: Updated "/home/core/.ssh/authorized_keys" Sep 11 00:20:04.226213 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 11 00:20:04.237207 systemd[1]: Finished sshkeys.service. Sep 11 00:20:04.246839 containerd[1552]: time="2025-09-11T00:20:04Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 00:20:04.253475 containerd[1552]: time="2025-09-11T00:20:04.253412106Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 11 00:20:04.272757 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 00:20:04.314489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 00:20:04.321164 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 00:20:04.329185 systemd[1]: Started sshd@0-137.184.47.128:22-147.75.109.163:50458.service - OpenSSH per-connection server daemon (147.75.109.163:50458). 
Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.354995282Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.938µs" Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355060812Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355090705Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355328604Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355351894Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355390073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355469359Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 00:20:04.355572 containerd[1552]: time="2025-09-11T00:20:04.355487335Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.360801933Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.360853863Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.360878918Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.360899888Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.361094031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.361446866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:20:04.364631 containerd[1552]: time="2025-09-11T00:20:04.361499365Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 00:20:04.367465 containerd[1552]: time="2025-09-11T00:20:04.361519551Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 11 00:20:04.367465 containerd[1552]: time="2025-09-11T00:20:04.365175273Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 00:20:04.369744 containerd[1552]: time="2025-09-11T00:20:04.369244262Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 00:20:04.369744 containerd[1552]: time="2025-09-11T00:20:04.369439877Z" level=info msg="metadata content store policy set" policy=shared Sep 11 00:20:04.389017 containerd[1552]: time="2025-09-11T00:20:04.388868284Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Sep 11 00:20:04.389323 containerd[1552]: time="2025-09-11T00:20:04.389287039Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390644139Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390701515Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390726238Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390743183Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390762710Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390803672Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390825979Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390846110Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390861513Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.390882647Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.391126324Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.391162001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.391186626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 00:20:04.394566 containerd[1552]: time="2025-09-11T00:20:04.391204025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391223046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391238953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391258897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391274214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391293465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391315369Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391331109Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 00:20:04.395126 containerd[1552]: 
time="2025-09-11T00:20:04.391434561Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391455591Z" level=info msg="Start snapshots syncer" Sep 11 00:20:04.395126 containerd[1552]: time="2025-09-11T00:20:04.391489333Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 00:20:04.405558 containerd[1552]: time="2025-09-11T00:20:04.402610605Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 00:20:04.405558 containerd[1552]: time="2025-09-11T00:20:04.403803208Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410145056Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410581789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410652850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410823978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410934849Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410958818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.410981923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411003295Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411054682Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411074020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411093732Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411152292Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411177310Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 00:20:04.412325 containerd[1552]: time="2025-09-11T00:20:04.411193711Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411210370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411224310Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411242216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411262121Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411324070Z" level=info msg="runtime interface created" Sep 11 00:20:04.412911 containerd[1552]: 
time="2025-09-11T00:20:04.411335437Z" level=info msg="created NRI interface" Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411349267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411375326Z" level=info msg="Connect containerd service" Sep 11 00:20:04.412911 containerd[1552]: time="2025-09-11T00:20:04.411423997Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 00:20:04.430085 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 00:20:04.431927 containerd[1552]: time="2025-09-11T00:20:04.430625032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:20:04.432238 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 00:20:04.445519 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 00:20:04.490582 sshd[1640]: Access denied for user core by PAM account configuration [preauth] Sep 11 00:20:04.490316 systemd[1]: sshd@0-137.184.47.128:22-147.75.109.163:50458.service: Deactivated successfully. Sep 11 00:20:04.608548 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 00:20:04.614056 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 00:20:04.625472 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 11 00:20:04.619752 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 11 00:20:04.623027 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 00:20:04.630177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
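[Editor's note] The `failed to load cni during init` error above is expected on first boot: nothing has populated /etc/cni/net.d yet, and the CRI plugin's conf syncer retries once a pod network add-on installs a config. As a hedged illustration only (the network name, bridge name, and subnet below are hypothetical, not taken from this log), the syncer is looking for a conflist of roughly this shape:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Such a file would live at e.g. /etc/cni/net.d/10-examplenet.conflist; in practice the cluster's network add-on ships its own config rather than this hand-written sketch.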
Sep 11 00:20:04.730085 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 11 00:20:04.819084 kernel: Console: switching to colour dummy device 80x25 Sep 11 00:20:04.828622 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 11 00:20:04.828759 kernel: [drm] features: -context_init Sep 11 00:20:04.931977 kernel: [drm] number of scanouts: 1 Sep 11 00:20:04.932087 kernel: [drm] number of cap sets: 0 Sep 11 00:20:04.932113 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Sep 11 00:20:04.936872 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 11 00:20:04.937003 kernel: Console: switching to colour frame buffer device 128x48 Sep 11 00:20:04.942163 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 11 00:20:04.956617 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 11 00:20:04.974756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 11 00:20:04.994726 systemd-logind[1523]: Watching system buttons on /dev/input/event2 (Power Button) Sep 11 00:20:05.027419 containerd[1552]: time="2025-09-11T00:20:05.027358574Z" level=info msg="Start subscribing containerd event" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027596562Z" level=info msg="Start recovering state" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027739354Z" level=info msg="Start event monitor" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027759991Z" level=info msg="Start cni network conf syncer for default" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027773648Z" level=info msg="Start streaming server" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027786297Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027796597Z" level=info msg="runtime interface starting up..." Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027809982Z" level=info msg="starting plugins..." Sep 11 00:20:05.028079 containerd[1552]: time="2025-09-11T00:20:05.027830583Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 00:20:05.029212 containerd[1552]: time="2025-09-11T00:20:05.029173985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 00:20:05.029485 containerd[1552]: time="2025-09-11T00:20:05.029454990Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 00:20:05.033117 containerd[1552]: time="2025-09-11T00:20:05.032154397Z" level=info msg="containerd successfully booted in 0.786021s" Sep 11 00:20:05.032482 systemd[1]: Started containerd.service - containerd container runtime. Sep 11 00:20:05.128249 kernel: EDAC MC: Ver: 3.0.0 Sep 11 00:20:05.135414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 11 00:20:05.136001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:20:05.137190 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:20:05.143642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:20:05.150985 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:20:05.236272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 11 00:20:05.236957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:20:05.243932 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 11 00:20:05.252373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 00:20:05.359845 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 00:20:05.736100 tar[1535]: linux-amd64/README.md Sep 11 00:20:05.763091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 00:20:06.294503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:20:06.295499 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 00:20:06.297994 systemd[1]: Startup finished in 4.275s (kernel) + 9.127s (initrd) + 8.526s (userspace) = 21.929s. 
Sep 11 00:20:06.308250 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:20:07.262595 kubelet[1694]: E0911 00:20:07.262423 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:20:07.266458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:20:07.267223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:20:07.267815 systemd[1]: kubelet.service: Consumed 1.735s CPU time, 267.4M memory peak. Sep 11 00:20:14.514871 systemd[1]: Started sshd@1-137.184.47.128:22-147.75.109.163:56876.service - OpenSSH per-connection server daemon (147.75.109.163:56876). Sep 11 00:20:14.582368 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 56876 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:14.585733 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:14.607573 systemd-logind[1523]: New session 1 of user core. Sep 11 00:20:14.608268 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 00:20:14.610893 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 11 00:20:14.661175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 00:20:14.665721 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 00:20:14.684916 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 00:20:14.689400 systemd-logind[1523]: New session c1 of user core. 
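[Editor's note] The kubelet exit above (`/var/lib/kubelet/config.yaml: no such file or directory`) is the normal pre-bootstrap state: kubeadm writes that file during `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears (the restart counter reaches 1 later in this log). As a hedged sketch only — every value below is illustrative and not read from this host — the file is a KubeletConfiguration document along these lines:

```yaml
# /var/lib/kubelet/config.yaml -- ordinarily generated by kubeadm, not written by hand
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # consistent with SystemdCgroup=true in the containerd runc options logged earlier
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10                   # illustrative service-network DNS address
clusterDomain: cluster.local
```

Once bootstrap supplies this file, the subsequent kubelet.service restarts in this log would proceed past config loading.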
Sep 11 00:20:14.880223 systemd[1710]: Queued start job for default target default.target. Sep 11 00:20:14.889131 systemd[1710]: Created slice app.slice - User Application Slice. Sep 11 00:20:14.890581 systemd[1710]: Reached target paths.target - Paths. Sep 11 00:20:14.890681 systemd[1710]: Reached target timers.target - Timers. Sep 11 00:20:14.892843 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 00:20:14.930615 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 00:20:14.930791 systemd[1710]: Reached target sockets.target - Sockets. Sep 11 00:20:14.930875 systemd[1710]: Reached target basic.target - Basic System. Sep 11 00:20:14.930928 systemd[1710]: Reached target default.target - Main User Target. Sep 11 00:20:14.930971 systemd[1710]: Startup finished in 228ms. Sep 11 00:20:14.931454 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 00:20:14.942853 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 00:20:15.014380 systemd[1]: Started sshd@2-137.184.47.128:22-147.75.109.163:56884.service - OpenSSH per-connection server daemon (147.75.109.163:56884). Sep 11 00:20:15.100701 sshd[1721]: Accepted publickey for core from 147.75.109.163 port 56884 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:15.105699 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:15.117084 systemd-logind[1523]: New session 2 of user core. Sep 11 00:20:15.123926 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 00:20:15.203683 sshd[1723]: Connection closed by 147.75.109.163 port 56884 Sep 11 00:20:15.204887 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:15.222342 systemd[1]: sshd@2-137.184.47.128:22-147.75.109.163:56884.service: Deactivated successfully. Sep 11 00:20:15.225407 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 11 00:20:15.227570 systemd-logind[1523]: Session 2 logged out. Waiting for processes to exit. Sep 11 00:20:15.238139 systemd[1]: Started sshd@3-137.184.47.128:22-147.75.109.163:56900.service - OpenSSH per-connection server daemon (147.75.109.163:56900). Sep 11 00:20:15.240277 systemd-logind[1523]: Removed session 2. Sep 11 00:20:15.335046 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 56900 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:15.340655 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:15.351543 systemd-logind[1523]: New session 3 of user core. Sep 11 00:20:15.365918 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 00:20:15.443927 sshd[1731]: Connection closed by 147.75.109.163 port 56900 Sep 11 00:20:15.444797 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:15.457569 systemd[1]: sshd@3-137.184.47.128:22-147.75.109.163:56900.service: Deactivated successfully. Sep 11 00:20:15.462121 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 00:20:15.467155 systemd-logind[1523]: Session 3 logged out. Waiting for processes to exit. Sep 11 00:20:15.471331 systemd[1]: Started sshd@4-137.184.47.128:22-147.75.109.163:56906.service - OpenSSH per-connection server daemon (147.75.109.163:56906). Sep 11 00:20:15.474671 systemd-logind[1523]: Removed session 3. Sep 11 00:20:15.550916 sshd[1737]: Accepted publickey for core from 147.75.109.163 port 56906 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:15.553223 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:15.563445 systemd-logind[1523]: New session 4 of user core. Sep 11 00:20:15.570046 systemd[1]: Started session-4.scope - Session 4 of User core. 
Sep 11 00:20:15.638311 sshd[1739]: Connection closed by 147.75.109.163 port 56906 Sep 11 00:20:15.639186 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:15.657606 systemd[1]: sshd@4-137.184.47.128:22-147.75.109.163:56906.service: Deactivated successfully. Sep 11 00:20:15.662932 systemd[1]: session-4.scope: Deactivated successfully. Sep 11 00:20:15.666268 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Sep 11 00:20:15.673278 systemd[1]: Started sshd@5-137.184.47.128:22-147.75.109.163:56916.service - OpenSSH per-connection server daemon (147.75.109.163:56916). Sep 11 00:20:15.674622 systemd-logind[1523]: Removed session 4. Sep 11 00:20:15.756729 sshd[1745]: Accepted publickey for core from 147.75.109.163 port 56916 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:15.758157 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:15.771391 systemd-logind[1523]: New session 5 of user core. Sep 11 00:20:15.784900 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 11 00:20:15.876982 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 11 00:20:15.877440 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:20:15.895909 sudo[1748]: pam_unix(sudo:session): session closed for user root Sep 11 00:20:15.901002 sshd[1747]: Connection closed by 147.75.109.163 port 56916 Sep 11 00:20:15.907672 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:15.917478 systemd[1]: sshd@5-137.184.47.128:22-147.75.109.163:56916.service: Deactivated successfully. Sep 11 00:20:15.921130 systemd[1]: session-5.scope: Deactivated successfully. Sep 11 00:20:15.927741 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. 
Sep 11 00:20:15.933875 systemd[1]: Started sshd@6-137.184.47.128:22-147.75.109.163:56930.service - OpenSSH per-connection server daemon (147.75.109.163:56930). Sep 11 00:20:15.935260 systemd-logind[1523]: Removed session 5. Sep 11 00:20:16.018742 sshd[1754]: Accepted publickey for core from 147.75.109.163 port 56930 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:16.021153 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:16.034078 systemd-logind[1523]: New session 6 of user core. Sep 11 00:20:16.044988 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 11 00:20:16.114807 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 11 00:20:16.115334 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:20:16.130681 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 11 00:20:16.140505 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 11 00:20:16.141710 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:20:16.167741 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 11 00:20:16.243079 augenrules[1780]: No rules Sep 11 00:20:16.245307 systemd[1]: audit-rules.service: Deactivated successfully. Sep 11 00:20:16.246349 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 11 00:20:16.248402 sudo[1757]: pam_unix(sudo:session): session closed for user root Sep 11 00:20:16.254673 sshd[1756]: Connection closed by 147.75.109.163 port 56930 Sep 11 00:20:16.255310 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 11 00:20:16.270498 systemd[1]: sshd@6-137.184.47.128:22-147.75.109.163:56930.service: Deactivated successfully. 
Sep 11 00:20:16.274583 systemd[1]: session-6.scope: Deactivated successfully. Sep 11 00:20:16.276115 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Sep 11 00:20:16.281969 systemd[1]: Started sshd@7-137.184.47.128:22-147.75.109.163:56932.service - OpenSSH per-connection server daemon (147.75.109.163:56932). Sep 11 00:20:16.284660 systemd-logind[1523]: Removed session 6. Sep 11 00:20:16.370389 sshd[1789]: Accepted publickey for core from 147.75.109.163 port 56932 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:20:16.376880 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:20:16.392156 systemd-logind[1523]: New session 7 of user core. Sep 11 00:20:16.398604 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 11 00:20:16.480887 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 11 00:20:16.481359 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 11 00:20:17.203830 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 11 00:20:17.228261 (dockerd)[1810]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 11 00:20:17.286745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 11 00:20:17.289736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:20:17.535056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 00:20:17.545076 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 00:20:17.613903 kubelet[1823]: E0911 00:20:17.613343 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 00:20:17.617068 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 00:20:17.617264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 00:20:17.617687 systemd[1]: kubelet.service: Consumed 234ms CPU time, 108.2M memory peak. Sep 11 00:20:17.709128 dockerd[1810]: time="2025-09-11T00:20:17.709028330Z" level=info msg="Starting up" Sep 11 00:20:17.715585 dockerd[1810]: time="2025-09-11T00:20:17.714686830Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 11 00:20:17.765667 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2578531307-merged.mount: Deactivated successfully. Sep 11 00:20:17.803712 dockerd[1810]: time="2025-09-11T00:20:17.803273423Z" level=info msg="Loading containers: start." Sep 11 00:20:17.822593 kernel: Initializing XFRM netlink socket Sep 11 00:20:18.186718 systemd-networkd[1443]: docker0: Link UP Sep 11 00:20:18.193190 dockerd[1810]: time="2025-09-11T00:20:18.193120945Z" level=info msg="Loading containers: done." 
Sep 11 00:20:18.218279 dockerd[1810]: time="2025-09-11T00:20:18.218189428Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 11 00:20:18.218516 dockerd[1810]: time="2025-09-11T00:20:18.218319355Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 11 00:20:18.218516 dockerd[1810]: time="2025-09-11T00:20:18.218493010Z" level=info msg="Initializing buildkit" Sep 11 00:20:18.258740 dockerd[1810]: time="2025-09-11T00:20:18.258665022Z" level=info msg="Completed buildkit initialization" Sep 11 00:20:18.270646 dockerd[1810]: time="2025-09-11T00:20:18.270553101Z" level=info msg="Daemon has completed initialization" Sep 11 00:20:18.271495 dockerd[1810]: time="2025-09-11T00:20:18.271062831Z" level=info msg="API listen on /run/docker.sock" Sep 11 00:20:18.271337 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 11 00:20:18.760132 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1456859033-merged.mount: Deactivated successfully. Sep 11 00:20:19.327056 containerd[1552]: time="2025-09-11T00:20:19.327001225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 11 00:20:20.237173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1720029204.mount: Deactivated successfully. 
Sep 11 00:20:21.999179 containerd[1552]: time="2025-09-11T00:20:21.997278795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:21.999179 containerd[1552]: time="2025-09-11T00:20:21.999036774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Sep 11 00:20:21.999179 containerd[1552]: time="2025-09-11T00:20:21.999088302Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:22.003786 containerd[1552]: time="2025-09-11T00:20:22.003726172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:22.005395 containerd[1552]: time="2025-09-11T00:20:22.005345488Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.678281667s"
Sep 11 00:20:22.005623 containerd[1552]: time="2025-09-11T00:20:22.005598538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Sep 11 00:20:22.006771 containerd[1552]: time="2025-09-11T00:20:22.006737665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 11 00:20:23.806153 containerd[1552]: time="2025-09-11T00:20:23.806031311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:23.807720 containerd[1552]: time="2025-09-11T00:20:23.807661273Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Sep 11 00:20:23.809054 containerd[1552]: time="2025-09-11T00:20:23.808971579Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:23.812732 containerd[1552]: time="2025-09-11T00:20:23.812650892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:23.814575 containerd[1552]: time="2025-09-11T00:20:23.814380931Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.807444976s"
Sep 11 00:20:23.814575 containerd[1552]: time="2025-09-11T00:20:23.814433337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Sep 11 00:20:23.815513 containerd[1552]: time="2025-09-11T00:20:23.815473728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 11 00:20:25.444615 containerd[1552]: time="2025-09-11T00:20:25.443566519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:25.445184 containerd[1552]: time="2025-09-11T00:20:25.444933270Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Sep 11 00:20:25.448575 containerd[1552]: time="2025-09-11T00:20:25.446472459Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:25.450343 containerd[1552]: time="2025-09-11T00:20:25.450287995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:25.451807 containerd[1552]: time="2025-09-11T00:20:25.451750571Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.636229425s"
Sep 11 00:20:25.451807 containerd[1552]: time="2025-09-11T00:20:25.451805224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Sep 11 00:20:25.452479 containerd[1552]: time="2025-09-11T00:20:25.452331513Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 11 00:20:25.490826 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Sep 11 00:20:26.779989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422532922.mount: Deactivated successfully.
Sep 11 00:20:27.553476 containerd[1552]: time="2025-09-11T00:20:27.553397996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:27.556180 containerd[1552]: time="2025-09-11T00:20:27.556113184Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Sep 11 00:20:27.557469 containerd[1552]: time="2025-09-11T00:20:27.557383379Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:27.560575 containerd[1552]: time="2025-09-11T00:20:27.560494029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:27.561606 containerd[1552]: time="2025-09-11T00:20:27.561346169Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.108971315s"
Sep 11 00:20:27.561606 containerd[1552]: time="2025-09-11T00:20:27.561413048Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Sep 11 00:20:27.562283 containerd[1552]: time="2025-09-11T00:20:27.562244154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 11 00:20:27.786237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 11 00:20:27.790418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:20:28.016996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:28.029874 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:20:28.119225 kubelet[2119]: E0911 00:20:28.119092 2119 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:20:28.127442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:20:28.128408 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:20:28.129279 systemd[1]: kubelet.service: Consumed 239ms CPU time, 110.3M memory peak.
Sep 11 00:20:28.164305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745111619.mount: Deactivated successfully.
Sep 11 00:20:28.599991 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 11 00:20:29.498665 containerd[1552]: time="2025-09-11T00:20:29.498589384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:29.502858 containerd[1552]: time="2025-09-11T00:20:29.502671797Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 11 00:20:29.506887 containerd[1552]: time="2025-09-11T00:20:29.504249793Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:29.512359 containerd[1552]: time="2025-09-11T00:20:29.512256566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:29.514601 containerd[1552]: time="2025-09-11T00:20:29.514405179Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.951940093s"
Sep 11 00:20:29.515146 containerd[1552]: time="2025-09-11T00:20:29.514454194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 11 00:20:29.516311 containerd[1552]: time="2025-09-11T00:20:29.515471419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 00:20:30.048735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066082987.mount: Deactivated successfully.
Sep 11 00:20:30.056604 containerd[1552]: time="2025-09-11T00:20:30.055184767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:20:30.056604 containerd[1552]: time="2025-09-11T00:20:30.056604679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 11 00:20:30.058092 containerd[1552]: time="2025-09-11T00:20:30.058040649Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:20:30.062729 containerd[1552]: time="2025-09-11T00:20:30.062657435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:20:30.063907 containerd[1552]: time="2025-09-11T00:20:30.063853152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.344932ms"
Sep 11 00:20:30.064115 containerd[1552]: time="2025-09-11T00:20:30.064087292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 11 00:20:30.064765 containerd[1552]: time="2025-09-11T00:20:30.064715639Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 11 00:20:30.835266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161463840.mount: Deactivated successfully.
Sep 11 00:20:33.348568 containerd[1552]: time="2025-09-11T00:20:33.347895636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:33.349951 containerd[1552]: time="2025-09-11T00:20:33.349912943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Sep 11 00:20:33.352036 containerd[1552]: time="2025-09-11T00:20:33.351984163Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:33.356079 containerd[1552]: time="2025-09-11T00:20:33.356009772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:20:33.358899 containerd[1552]: time="2025-09-11T00:20:33.358323338Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.293561735s"
Sep 11 00:20:33.358899 containerd[1552]: time="2025-09-11T00:20:33.358395679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 11 00:20:37.583219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:37.584175 systemd[1]: kubelet.service: Consumed 239ms CPU time, 110.3M memory peak.
Sep 11 00:20:37.587727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:20:37.632204 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-7.scope)...
Sep 11 00:20:37.632233 systemd[1]: Reloading...
Sep 11 00:20:37.799579 zram_generator::config[2309]: No configuration found.
Sep 11 00:20:37.958237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:20:38.111294 systemd[1]: Reloading finished in 478 ms.
Sep 11 00:20:38.190426 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 00:20:38.190711 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 00:20:38.191251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:38.191395 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.4M memory peak.
Sep 11 00:20:38.196247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:20:38.401787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:38.414177 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:20:38.493912 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:20:38.494662 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:20:38.494662 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:20:38.496639 kubelet[2364]: I0911 00:20:38.496425 2364 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:20:39.037422 kubelet[2364]: I0911 00:20:39.037356 2364 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 11 00:20:39.037782 kubelet[2364]: I0911 00:20:39.037693 2364 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:20:39.039173 kubelet[2364]: I0911 00:20:39.039140 2364 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 11 00:20:39.080707 kubelet[2364]: I0911 00:20:39.080577 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:20:39.084515 kubelet[2364]: E0911 00:20:39.083896 2364 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://137.184.47.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 11 00:20:39.105660 kubelet[2364]: I0911 00:20:39.105458 2364 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:20:39.112551 kubelet[2364]: I0911 00:20:39.112474 2364 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:20:39.115053 kubelet[2364]: I0911 00:20:39.114721 2364 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:20:39.120112 kubelet[2364]: I0911 00:20:39.115025 2364 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-d6d7f926f9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:20:39.120112 kubelet[2364]: I0911 00:20:39.120107 2364 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:20:39.120112 kubelet[2364]: I0911 00:20:39.120129 2364 container_manager_linux.go:303] "Creating device plugin manager"
Sep 11 00:20:39.120599 kubelet[2364]: I0911 00:20:39.120355 2364 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:20:39.127179 kubelet[2364]: I0911 00:20:39.126771 2364 kubelet.go:480] "Attempting to sync node with API server"
Sep 11 00:20:39.127179 kubelet[2364]: I0911 00:20:39.126914 2364 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:20:39.127179 kubelet[2364]: I0911 00:20:39.127041 2364 kubelet.go:386] "Adding apiserver pod source"
Sep 11 00:20:39.129280 kubelet[2364]: I0911 00:20:39.129244 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:20:39.135779 kubelet[2364]: E0911 00:20:39.135599 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.47.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-d6d7f926f9&limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 11 00:20:39.138648 kubelet[2364]: E0911 00:20:39.137712 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.47.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 11 00:20:39.138648 kubelet[2364]: I0911 00:20:39.137871 2364 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:20:39.138648 kubelet[2364]: I0911 00:20:39.138587 2364 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 11 00:20:39.141274 kubelet[2364]: W0911 00:20:39.139419 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 00:20:39.150433 kubelet[2364]: I0911 00:20:39.149652 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:20:39.150433 kubelet[2364]: I0911 00:20:39.149765 2364 server.go:1289] "Started kubelet"
Sep 11 00:20:39.167595 kubelet[2364]: I0911 00:20:39.164995 2364 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:20:39.167595 kubelet[2364]: I0911 00:20:39.166483 2364 server.go:317] "Adding debug handlers to kubelet server"
Sep 11 00:20:39.168175 kubelet[2364]: I0911 00:20:39.167883 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:20:39.179743 kubelet[2364]: I0911 00:20:39.177594 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:20:39.179743 kubelet[2364]: I0911 00:20:39.177993 2364 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:20:39.180638 kubelet[2364]: I0911 00:20:39.180598 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:20:39.182899 kubelet[2364]: I0911 00:20:39.182862 2364 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:20:39.191340 kubelet[2364]: I0911 00:20:39.183040 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:20:39.191340 kubelet[2364]: E0911 00:20:39.183356 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found"
Sep 11 00:20:39.191340 kubelet[2364]: E0911 00:20:39.183199 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://137.184.47.128:6443/api/v1/namespaces/default/events\": dial tcp 137.184.47.128:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-n-d6d7f926f9.18641272de7dc16e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-n-d6d7f926f9,UID:ci-4372.1.0-n-d6d7f926f9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-n-d6d7f926f9,},FirstTimestamp:2025-09-11 00:20:39.149691246 +0000 UTC m=+0.725309702,LastTimestamp:2025-09-11 00:20:39.149691246 +0000 UTC m=+0.725309702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-n-d6d7f926f9,}"
Sep 11 00:20:39.191340 kubelet[2364]: I0911 00:20:39.190256 2364 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:20:39.191340 kubelet[2364]: E0911 00:20:39.190931 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.47.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 11 00:20:39.191826 kubelet[2364]: I0911 00:20:39.191360 2364 factory.go:223] Registration of the systemd container factory successfully
Sep 11 00:20:39.191826 kubelet[2364]: I0911 00:20:39.191569 2364 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:20:39.193833 kubelet[2364]: E0911 00:20:39.193722 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.47.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-d6d7f926f9?timeout=10s\": dial tcp 137.184.47.128:6443: connect: connection refused" interval="200ms"
Sep 11 00:20:39.203565 kubelet[2364]: I0911 00:20:39.202972 2364 factory.go:223] Registration of the containerd container factory successfully
Sep 11 00:20:39.214071 kubelet[2364]: E0911 00:20:39.214027 2364 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:20:39.226550 kubelet[2364]: I0911 00:20:39.226468 2364 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:20:39.226550 kubelet[2364]: I0911 00:20:39.226494 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:20:39.226765 kubelet[2364]: I0911 00:20:39.226746 2364 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:20:39.239793 kubelet[2364]: I0911 00:20:39.239646 2364 policy_none.go:49] "None policy: Start"
Sep 11 00:20:39.239793 kubelet[2364]: I0911 00:20:39.239691 2364 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:20:39.239793 kubelet[2364]: I0911 00:20:39.239717 2364 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:20:39.241316 kubelet[2364]: I0911 00:20:39.241118 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:20:39.244881 kubelet[2364]: I0911 00:20:39.244803 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:20:39.244881 kubelet[2364]: I0911 00:20:39.244832 2364 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 11 00:20:39.245168 kubelet[2364]: I0911 00:20:39.245097 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:20:39.245168 kubelet[2364]: I0911 00:20:39.245112 2364 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 11 00:20:39.245360 kubelet[2364]: E0911 00:20:39.245342 2364 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:20:39.250033 kubelet[2364]: E0911 00:20:39.249991 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.47.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 11 00:20:39.258760 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 11 00:20:39.275584 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 11 00:20:39.281382 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 11 00:20:39.290674 kubelet[2364]: E0911 00:20:39.290452 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found"
Sep 11 00:20:39.297612 kubelet[2364]: E0911 00:20:39.297324 2364 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 11 00:20:39.297967 kubelet[2364]: I0911 00:20:39.297942 2364 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:20:39.298088 kubelet[2364]: I0911 00:20:39.298048 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:20:39.298789 kubelet[2364]: I0911 00:20:39.298764 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:20:39.300400 kubelet[2364]: E0911 00:20:39.300308 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:20:39.300400 kubelet[2364]: E0911 00:20:39.300365 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-n-d6d7f926f9\" not found"
Sep 11 00:20:39.364045 systemd[1]: Created slice kubepods-burstable-podb4f80900fca5593c1184ea8ae80a26cb.slice - libcontainer container kubepods-burstable-podb4f80900fca5593c1184ea8ae80a26cb.slice.
Sep 11 00:20:39.378434 kubelet[2364]: E0911 00:20:39.378112 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.384707 systemd[1]: Created slice kubepods-burstable-pod9e7dbb4b3995df9802fb892912c5fec4.slice - libcontainer container kubepods-burstable-pod9e7dbb4b3995df9802fb892912c5fec4.slice.
Sep 11 00:20:39.393306 kubelet[2364]: E0911 00:20:39.393264 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.394692 kubelet[2364]: E0911 00:20:39.394645 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.47.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-d6d7f926f9?timeout=10s\": dial tcp 137.184.47.128:6443: connect: connection refused" interval="400ms"
Sep 11 00:20:39.397115 systemd[1]: Created slice kubepods-burstable-pod7269d15b00d416bce074a5d3d9902ab8.slice - libcontainer container kubepods-burstable-pod7269d15b00d416bce074a5d3d9902ab8.slice.
Sep 11 00:20:39.400231 kubelet[2364]: I0911 00:20:39.400199 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.401250 kubelet[2364]: E0911 00:20:39.400947 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.401925 kubelet[2364]: E0911 00:20:39.401890 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.47.128:6443/api/v1/nodes\": dial tcp 137.184.47.128:6443: connect: connection refused" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.491671 kubelet[2364]: I0911 00:20:39.491611 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.491975 kubelet[2364]: I0911 00:20:39.491950 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492117 kubelet[2364]: I0911 00:20:39.492087 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492230 kubelet[2364]: I0911 00:20:39.492213 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492487 kubelet[2364]: I0911 00:20:39.492338 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492487 kubelet[2364]: I0911 00:20:39.492378 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492487 kubelet[2364]: I0911 00:20:39.492413 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7269d15b00d416bce074a5d3d9902ab8-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-d6d7f926f9\" (UID: \"7269d15b00d416bce074a5d3d9902ab8\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492674 kubelet[2364]: I0911 00:20:39.492543 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.492674 kubelet[2364]: I0911 00:20:39.492599 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.606107 kubelet[2364]: I0911 00:20:39.605017 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.608614 kubelet[2364]: E0911 00:20:39.608560 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.47.128:6443/api/v1/nodes\": dial tcp 137.184.47.128:6443: connect: connection refused" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:39.679699 kubelet[2364]: E0911 00:20:39.679628 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:39.680412 containerd[1552]: time="2025-09-11T00:20:39.680357615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-d6d7f926f9,Uid:b4f80900fca5593c1184ea8ae80a26cb,Namespace:kube-system,Attempt:0,}"
Sep 11 00:20:39.694861 kubelet[2364]: E0911 00:20:39.694813 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:39.696043 containerd[1552]: time="2025-09-11T00:20:39.695713329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-d6d7f926f9,Uid:9e7dbb4b3995df9802fb892912c5fec4,Namespace:kube-system,Attempt:0,}"
Sep 11 00:20:39.701994 kubelet[2364]: E0911 00:20:39.701928 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:39.704081 containerd[1552]: time="2025-09-11T00:20:39.703818164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-d6d7f926f9,Uid:7269d15b00d416bce074a5d3d9902ab8,Namespace:kube-system,Attempt:0,}"
Sep 11 00:20:39.795888 kubelet[2364]: E0911 00:20:39.795819 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.47.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-d6d7f926f9?timeout=10s\": dial tcp 137.184.47.128:6443: connect: connection refused" interval="800ms"
Sep 11 00:20:39.898454 containerd[1552]: time="2025-09-11T00:20:39.897458592Z" level=info msg="connecting to shim c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445" address="unix:///run/containerd/s/40485ed6236970a22f9711f30515b9fa8ad01e0b797f48c2015af06f58eed595" namespace=k8s.io protocol=ttrpc
version=3 Sep 11 00:20:39.899995 containerd[1552]: time="2025-09-11T00:20:39.899939114Z" level=info msg="connecting to shim 3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a" address="unix:///run/containerd/s/296f8c10395076df92a916e99567d9697534f79138567ac95962ece8546f4acf" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:39.906970 containerd[1552]: time="2025-09-11T00:20:39.906736987Z" level=info msg="connecting to shim 5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23" address="unix:///run/containerd/s/bd9df5dc41f7c08070423965ce131309a376ceea5db258abc1d7c20fb24303ed" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:40.011917 kubelet[2364]: I0911 00:20:40.011301 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:40.011917 kubelet[2364]: E0911 00:20:40.011861 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://137.184.47.128:6443/api/v1/nodes\": dial tcp 137.184.47.128:6443: connect: connection refused" node="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:40.040208 kubelet[2364]: E0911 00:20:40.040148 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://137.184.47.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 11 00:20:40.065049 systemd[1]: Started cri-containerd-3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a.scope - libcontainer container 3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a. Sep 11 00:20:40.069078 systemd[1]: Started cri-containerd-c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445.scope - libcontainer container c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445. 
Sep 11 00:20:40.076753 systemd[1]: Started cri-containerd-5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23.scope - libcontainer container 5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23.
Sep 11 00:20:40.190122 containerd[1552]: time="2025-09-11T00:20:40.189759229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-n-d6d7f926f9,Uid:7269d15b00d416bce074a5d3d9902ab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23\""
Sep 11 00:20:40.192947 kubelet[2364]: E0911 00:20:40.192909 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:40.205283 containerd[1552]: time="2025-09-11T00:20:40.205218462Z" level=info msg="CreateContainer within sandbox \"5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 11 00:20:40.230568 containerd[1552]: time="2025-09-11T00:20:40.229694376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-n-d6d7f926f9,Uid:9e7dbb4b3995df9802fb892912c5fec4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a\""
Sep 11 00:20:40.231005 kubelet[2364]: E0911 00:20:40.230948 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:40.231418 containerd[1552]: time="2025-09-11T00:20:40.231382323Z" level=info msg="Container f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:20:40.240823 containerd[1552]: time="2025-09-11T00:20:40.240768236Z" level=info msg="CreateContainer within sandbox \"3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 11 00:20:40.247053 containerd[1552]: time="2025-09-11T00:20:40.246995082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-n-d6d7f926f9,Uid:b4f80900fca5593c1184ea8ae80a26cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445\""
Sep 11 00:20:40.251206 containerd[1552]: time="2025-09-11T00:20:40.250938852Z" level=info msg="CreateContainer within sandbox \"5bf302cc2d64444d31aa217fb55a045f61b15e38b66ee4a133db7fbe14a3ba23\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb\""
Sep 11 00:20:40.251399 kubelet[2364]: E0911 00:20:40.251334 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:40.256570 containerd[1552]: time="2025-09-11T00:20:40.256161801Z" level=info msg="StartContainer for \"f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb\""
Sep 11 00:20:40.262316 containerd[1552]: time="2025-09-11T00:20:40.262255210Z" level=info msg="connecting to shim f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb" address="unix:///run/containerd/s/bd9df5dc41f7c08070423965ce131309a376ceea5db258abc1d7c20fb24303ed" protocol=ttrpc version=3
Sep 11 00:20:40.269108 containerd[1552]: time="2025-09-11T00:20:40.269053171Z" level=info msg="CreateContainer within sandbox \"c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 11 00:20:40.274938 containerd[1552]: time="2025-09-11T00:20:40.274874035Z" level=info msg="Container 7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:20:40.293341 containerd[1552]: time="2025-09-11T00:20:40.293162141Z" level=info msg="CreateContainer within sandbox \"3b7913a1545f65decfc76fac28d5a01dea2e755e67ad4476b776e37a78d2bb3a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650\""
Sep 11 00:20:40.296761 containerd[1552]: time="2025-09-11T00:20:40.296706227Z" level=info msg="StartContainer for \"7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650\""
Sep 11 00:20:40.300142 containerd[1552]: time="2025-09-11T00:20:40.300089352Z" level=info msg="Container 54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:20:40.301144 systemd[1]: Started cri-containerd-f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb.scope - libcontainer container f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb.
Sep 11 00:20:40.301858 containerd[1552]: time="2025-09-11T00:20:40.300458286Z" level=info msg="connecting to shim 7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650" address="unix:///run/containerd/s/296f8c10395076df92a916e99567d9697534f79138567ac95962ece8546f4acf" protocol=ttrpc version=3
Sep 11 00:20:40.324560 containerd[1552]: time="2025-09-11T00:20:40.324474873Z" level=info msg="CreateContainer within sandbox \"c5385569d655cba7c00a8bcb4b55a7c2f3eb143802e28a86323a6438854ca445\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab\""
Sep 11 00:20:40.326308 containerd[1552]: time="2025-09-11T00:20:40.326260328Z" level=info msg="StartContainer for \"54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab\""
Sep 11 00:20:40.329298 containerd[1552]: time="2025-09-11T00:20:40.329236046Z" level=info msg="connecting to shim 54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab" address="unix:///run/containerd/s/40485ed6236970a22f9711f30515b9fa8ad01e0b797f48c2015af06f58eed595" protocol=ttrpc version=3
Sep 11 00:20:40.352823 systemd[1]: Started cri-containerd-7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650.scope - libcontainer container 7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650.
Sep 11 00:20:40.374789 systemd[1]: Started cri-containerd-54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab.scope - libcontainer container 54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab.
Sep 11 00:20:40.403605 kubelet[2364]: E0911 00:20:40.403510 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://137.184.47.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-n-d6d7f926f9&limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 11 00:20:40.444547 containerd[1552]: time="2025-09-11T00:20:40.444115480Z" level=info msg="StartContainer for \"f9d7377f64563db77a50f321bb3caaedcc5f3cd495ccb023cb9b2221ebe3f5fb\" returns successfully"
Sep 11 00:20:40.528481 containerd[1552]: time="2025-09-11T00:20:40.528378784Z" level=info msg="StartContainer for \"54c008bd10efddad9365a8dbc84bc04bca7dcf8d0e2c904265cdbcdba5cea9ab\" returns successfully"
Sep 11 00:20:40.533207 containerd[1552]: time="2025-09-11T00:20:40.533088353Z" level=info msg="StartContainer for \"7caf1d70cfdc6abe3310b568147398ca9f2aa07f3f87dfd4be95613dbea40650\" returns successfully"
Sep 11 00:20:40.544806 kubelet[2364]: E0911 00:20:40.544730 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://137.184.47.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 11 00:20:40.565965 kubelet[2364]: E0911 00:20:40.565912 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://137.184.47.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 137.184.47.128:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 11 00:20:40.597783 kubelet[2364]: E0911 00:20:40.597456 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://137.184.47.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-n-d6d7f926f9?timeout=10s\": dial tcp 137.184.47.128:6443: connect: connection refused" interval="1.6s"
Sep 11 00:20:40.816342 kubelet[2364]: I0911 00:20:40.815892 2364 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:41.285665 kubelet[2364]: E0911 00:20:41.284157 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:41.285665 kubelet[2364]: E0911 00:20:41.284400 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:41.297079 kubelet[2364]: E0911 00:20:41.296739 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:41.297079 kubelet[2364]: E0911 00:20:41.296983 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:41.297829 kubelet[2364]: E0911 00:20:41.297804 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:41.298141 kubelet[2364]: E0911 00:20:41.298110 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:42.300263 kubelet[2364]: E0911 00:20:42.299903 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:42.303132 kubelet[2364]: E0911 00:20:42.303097 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:42.305219 kubelet[2364]: E0911 00:20:42.303281 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:42.305344 kubelet[2364]: E0911 00:20:42.305114 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:42.305617 kubelet[2364]: E0911 00:20:42.305592 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:42.305791 kubelet[2364]: E0911 00:20:42.305592 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:43.139932 kubelet[2364]: I0911 00:20:43.139547 2364 apiserver.go:52] "Watching apiserver"
Sep 11 00:20:43.210059 kubelet[2364]: E0911 00:20:43.210017 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.290882 kubelet[2364]: I0911 00:20:43.290700 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 11 00:20:43.302563 kubelet[2364]: E0911 00:20:43.302292 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.302563 kubelet[2364]: E0911 00:20:43.302472 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:43.303050 kubelet[2364]: E0911 00:20:43.302839 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-n-d6d7f926f9\" not found" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.303100 kubelet[2364]: E0911 00:20:43.303083 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:43.403495 kubelet[2364]: I0911 00:20:43.402637 2364 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.403495 kubelet[2364]: E0911 00:20:43.402703 2364 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.1.0-n-d6d7f926f9\": node \"ci-4372.1.0-n-d6d7f926f9\" not found"
Sep 11 00:20:43.489675 kubelet[2364]: I0911 00:20:43.489623 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.501546 kubelet[2364]: E0911 00:20:43.501096 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-d6d7f926f9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.501546 kubelet[2364]: I0911 00:20:43.501146 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.504317 kubelet[2364]: E0911 00:20:43.504056 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.504317 kubelet[2364]: I0911 00:20:43.504091 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:43.507938 kubelet[2364]: E0911 00:20:43.507881 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:45.144572 kubelet[2364]: I0911 00:20:45.144379 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:45.154218 kubelet[2364]: I0911 00:20:45.154055 2364 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:20:45.154845 kubelet[2364]: E0911 00:20:45.154819 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:45.308129 kubelet[2364]: E0911 00:20:45.308078 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:20:45.378297 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)...
Sep 11 00:20:45.378321 systemd[1]: Reloading...
Sep 11 00:20:45.520581 zram_generator::config[2687]: No configuration found.
Sep 11 00:20:45.680454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:20:45.905251 systemd[1]: Reloading finished in 526 ms.
Sep 11 00:20:45.943415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:20:45.966866 systemd[1]: kubelet.service: Deactivated successfully.
Sep 11 00:20:45.968129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:45.968236 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 125.2M memory peak.
Sep 11 00:20:45.973702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:20:46.241696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:20:46.254145 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:20:46.386366 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:20:46.387697 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:20:46.387697 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:20:46.387697 kubelet[2738]: I0911 00:20:46.387045 2738 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:20:46.404896 kubelet[2738]: I0911 00:20:46.403022 2738 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 11 00:20:46.404896 kubelet[2738]: I0911 00:20:46.403067 2738 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:20:46.404896 kubelet[2738]: I0911 00:20:46.403376 2738 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 11 00:20:46.404896 kubelet[2738]: I0911 00:20:46.404778 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 11 00:20:46.416668 kubelet[2738]: I0911 00:20:46.415573 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:20:46.439579 kubelet[2738]: I0911 00:20:46.438449 2738 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:20:46.444130 kubelet[2738]: I0911 00:20:46.444081 2738 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:20:46.444649 kubelet[2738]: I0911 00:20:46.444615 2738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:20:46.445056 kubelet[2738]: I0911 00:20:46.444763 2738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-n-d6d7f926f9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:20:46.445256 kubelet[2738]: I0911 00:20:46.445241 2738 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:20:46.445329 kubelet[2738]: I0911 00:20:46.445321 2738 container_manager_linux.go:303] "Creating device plugin manager"
Sep 11 00:20:46.445507 kubelet[2738]: I0911 00:20:46.445495 2738 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:20:46.445847 kubelet[2738]: I0911 00:20:46.445834 2738 kubelet.go:480] "Attempting to sync node with API server"
Sep 11 00:20:46.446729 kubelet[2738]: I0911 00:20:46.446709 2738 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:20:46.446982 kubelet[2738]: I0911 00:20:46.446971 2738 kubelet.go:386] "Adding apiserver pod source"
Sep 11 00:20:46.447098 kubelet[2738]: I0911 00:20:46.447088 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:20:46.457911 kubelet[2738]: I0911 00:20:46.456876 2738 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:20:46.461574 kubelet[2738]: I0911 00:20:46.460384 2738 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 11 00:20:46.473184 kubelet[2738]: I0911 00:20:46.472271 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:20:46.473184 kubelet[2738]: I0911 00:20:46.472343 2738 server.go:1289] "Started kubelet"
Sep 11 00:20:46.482559 kubelet[2738]: I0911 00:20:46.481824 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:20:46.498169 kubelet[2738]: I0911 00:20:46.498008 2738 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:20:46.502056 kubelet[2738]: I0911 00:20:46.502017 2738 server.go:317] "Adding debug handlers to kubelet server"
Sep 11 00:20:46.520564 kubelet[2738]: I0911 00:20:46.518957 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:20:46.520564 kubelet[2738]: I0911 00:20:46.519350 2738 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:20:46.527968 kubelet[2738]: I0911 00:20:46.527927 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:20:46.539729 kubelet[2738]: I0911 00:20:46.539684 2738 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:20:46.543655 kubelet[2738]: I0911 00:20:46.543606 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:20:46.547651 kubelet[2738]: I0911 00:20:46.547204 2738 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:20:46.552714 kubelet[2738]: I0911 00:20:46.552163 2738 factory.go:223] Registration of the systemd container factory successfully
Sep 11 00:20:46.553404 kubelet[2738]: I0911 00:20:46.552978 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:20:46.554128 kubelet[2738]: I0911 00:20:46.554097 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:20:46.561676 kubelet[2738]: I0911 00:20:46.561629 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:20:46.563604 kubelet[2738]: I0911 00:20:46.563575 2738 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 11 00:20:46.563788 kubelet[2738]: I0911 00:20:46.563776 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:20:46.563854 kubelet[2738]: I0911 00:20:46.563846 2738 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 11 00:20:46.563997 kubelet[2738]: E0911 00:20:46.563973 2738 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 11 00:20:46.577891 kubelet[2738]: I0911 00:20:46.577851 2738 factory.go:223] Registration of the containerd container factory successfully
Sep 11 00:20:46.582881 kubelet[2738]: E0911 00:20:46.580516 2738 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:20:46.664257 kubelet[2738]: E0911 00:20:46.664192 2738 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 11 00:20:46.693218 kubelet[2738]: I0911 00:20:46.693083 2738 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:20:46.693453 kubelet[2738]: I0911 00:20:46.693436 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:20:46.693597 kubelet[2738]: I0911 00:20:46.693588 2738 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:20:46.694057 kubelet[2738]: I0911 00:20:46.694039 2738 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 11 00:20:46.694218 kubelet[2738]: I0911 00:20:46.694151 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 11 00:20:46.694218 kubelet[2738]: I0911 00:20:46.694182 2738 policy_none.go:49] "None policy: Start"
Sep 11 00:20:46.694218 kubelet[2738]: I0911 00:20:46.694196 2738 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 11 00:20:46.694486 kubelet[2738]: I0911 00:20:46.694322 2738 state_mem.go:35] "Initializing new in-memory state store"
Sep 11 00:20:46.694681 kubelet[2738]: I0911 00:20:46.694671 2738 state_mem.go:75] "Updated machine memory state"
Sep 11 00:20:46.702047 kubelet[2738]: E0911 00:20:46.702004 2738 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 11 00:20:46.704446 kubelet[2738]: I0911 00:20:46.704260 2738 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 11 00:20:46.704873 kubelet[2738]: I0911 00:20:46.704282 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 11 00:20:46.708599 kubelet[2738]: I0911 00:20:46.708065 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 11 00:20:46.714215 kubelet[2738]: E0911 00:20:46.714088 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 11 00:20:46.821601 kubelet[2738]: I0911 00:20:46.820780 2738 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.835848 kubelet[2738]: I0911 00:20:46.835810 2738 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.836875 kubelet[2738]: I0911 00:20:46.836158 2738 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.867951 kubelet[2738]: I0911 00:20:46.867817 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.871132 kubelet[2738]: I0911 00:20:46.871064 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.871886 kubelet[2738]: I0911 00:20:46.871808 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.882511 kubelet[2738]: I0911 00:20:46.882230 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:20:46.884676 kubelet[2738]: I0911 00:20:46.884646 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:20:46.889191 kubelet[2738]: I0911 00:20:46.889130 2738 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Sep 11 00:20:46.889820 kubelet[2738]: E0911 00:20:46.889518 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-n-d6d7f926f9\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.951512 kubelet[2738]: I0911 00:20:46.951150 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.951512 kubelet[2738]: I0911 00:20:46.951210 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") " pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9"
Sep 11 00:20:46.951512 kubelet[2738]: I0911 00:20:46.951246 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f80900fca5593c1184ea8ae80a26cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-n-d6d7f926f9\" (UID: \"b4f80900fca5593c1184ea8ae80a26cb\") "
pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951512 kubelet[2738]: I0911 00:20:46.951277 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951512 kubelet[2738]: I0911 00:20:46.951304 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951890 kubelet[2738]: I0911 00:20:46.951327 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7269d15b00d416bce074a5d3d9902ab8-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-n-d6d7f926f9\" (UID: \"7269d15b00d416bce074a5d3d9902ab8\") " pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951890 kubelet[2738]: I0911 00:20:46.951351 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951890 kubelet[2738]: I0911 00:20:46.951382 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:46.951890 kubelet[2738]: I0911 00:20:46.951410 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9e7dbb4b3995df9802fb892912c5fec4-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-n-d6d7f926f9\" (UID: \"9e7dbb4b3995df9802fb892912c5fec4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" Sep 11 00:20:47.183621 kubelet[2738]: E0911 00:20:47.183450 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.186030 kubelet[2738]: E0911 00:20:47.185929 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.191120 kubelet[2738]: E0911 00:20:47.191007 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.452999 kubelet[2738]: I0911 00:20:47.452856 2738 apiserver.go:52] "Watching apiserver" Sep 11 00:20:47.544988 kubelet[2738]: I0911 00:20:47.544762 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:20:47.632296 kubelet[2738]: E0911 00:20:47.632234 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.633623 kubelet[2738]: E0911 00:20:47.632663 
2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.633623 kubelet[2738]: E0911 00:20:47.632925 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:47.686472 kubelet[2738]: I0911 00:20:47.686240 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-n-d6d7f926f9" podStartSLOduration=1.6861115180000001 podStartE2EDuration="1.686111518s" podCreationTimestamp="2025-09-11 00:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:47.67090481 +0000 UTC m=+1.396954795" watchObservedRunningTime="2025-09-11 00:20:47.686111518 +0000 UTC m=+1.412161506" Sep 11 00:20:47.700627 kubelet[2738]: I0911 00:20:47.700248 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-n-d6d7f926f9" podStartSLOduration=2.700222728 podStartE2EDuration="2.700222728s" podCreationTimestamp="2025-09-11 00:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:47.688351406 +0000 UTC m=+1.414401401" watchObservedRunningTime="2025-09-11 00:20:47.700222728 +0000 UTC m=+1.426272728" Sep 11 00:20:47.715558 kubelet[2738]: I0911 00:20:47.714683 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-n-d6d7f926f9" podStartSLOduration=1.714656212 podStartE2EDuration="1.714656212s" podCreationTimestamp="2025-09-11 00:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:47.701072568 +0000 UTC m=+1.427122551" watchObservedRunningTime="2025-09-11 00:20:47.714656212 +0000 UTC m=+1.440706191" Sep 11 00:20:48.370443 update_engine[1524]: I20250911 00:20:48.369706 1524 update_attempter.cc:509] Updating boot flags... Sep 11 00:20:48.641190 kubelet[2738]: E0911 00:20:48.638353 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:48.650498 kubelet[2738]: E0911 00:20:48.650329 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:48.809965 kubelet[2738]: E0911 00:20:48.809920 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:49.639986 kubelet[2738]: E0911 00:20:49.639938 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:49.640912 kubelet[2738]: E0911 00:20:49.640861 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:50.792842 kubelet[2738]: I0911 00:20:50.792789 2738 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:20:50.793711 containerd[1552]: time="2025-09-11T00:20:50.793661444Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 11 00:20:50.794231 kubelet[2738]: I0911 00:20:50.794106 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:20:51.872888 systemd[1]: Created slice kubepods-besteffort-podbeacdfe3_e1d7_4fde_ae92_2c0322dd90df.slice - libcontainer container kubepods-besteffort-podbeacdfe3_e1d7_4fde_ae92_2c0322dd90df.slice. Sep 11 00:20:51.886316 kubelet[2738]: I0911 00:20:51.886079 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxnl4\" (UniqueName: \"kubernetes.io/projected/beacdfe3-e1d7-4fde-ae92-2c0322dd90df-kube-api-access-bxnl4\") pod \"kube-proxy-dgvjt\" (UID: \"beacdfe3-e1d7-4fde-ae92-2c0322dd90df\") " pod="kube-system/kube-proxy-dgvjt" Sep 11 00:20:51.886316 kubelet[2738]: I0911 00:20:51.886140 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/beacdfe3-e1d7-4fde-ae92-2c0322dd90df-kube-proxy\") pod \"kube-proxy-dgvjt\" (UID: \"beacdfe3-e1d7-4fde-ae92-2c0322dd90df\") " pod="kube-system/kube-proxy-dgvjt" Sep 11 00:20:51.886316 kubelet[2738]: I0911 00:20:51.886170 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beacdfe3-e1d7-4fde-ae92-2c0322dd90df-xtables-lock\") pod \"kube-proxy-dgvjt\" (UID: \"beacdfe3-e1d7-4fde-ae92-2c0322dd90df\") " pod="kube-system/kube-proxy-dgvjt" Sep 11 00:20:51.886316 kubelet[2738]: I0911 00:20:51.886198 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beacdfe3-e1d7-4fde-ae92-2c0322dd90df-lib-modules\") pod \"kube-proxy-dgvjt\" (UID: \"beacdfe3-e1d7-4fde-ae92-2c0322dd90df\") " pod="kube-system/kube-proxy-dgvjt" Sep 11 00:20:52.061710 systemd[1]: Created slice 
kubepods-besteffort-pod77f7c161_c811_4d47_9569_e0976c3efcb0.slice - libcontainer container kubepods-besteffort-pod77f7c161_c811_4d47_9569_e0976c3efcb0.slice. Sep 11 00:20:52.088711 kubelet[2738]: I0911 00:20:52.088458 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77f7c161-c811-4d47-9569-e0976c3efcb0-var-lib-calico\") pod \"tigera-operator-755d956888-qpm4r\" (UID: \"77f7c161-c811-4d47-9569-e0976c3efcb0\") " pod="tigera-operator/tigera-operator-755d956888-qpm4r" Sep 11 00:20:52.088711 kubelet[2738]: I0911 00:20:52.088630 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg46r\" (UniqueName: \"kubernetes.io/projected/77f7c161-c811-4d47-9569-e0976c3efcb0-kube-api-access-wg46r\") pod \"tigera-operator-755d956888-qpm4r\" (UID: \"77f7c161-c811-4d47-9569-e0976c3efcb0\") " pod="tigera-operator/tigera-operator-755d956888-qpm4r" Sep 11 00:20:52.182666 kubelet[2738]: E0911 00:20:52.182259 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:52.183794 containerd[1552]: time="2025-09-11T00:20:52.183728945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvjt,Uid:beacdfe3-e1d7-4fde-ae92-2c0322dd90df,Namespace:kube-system,Attempt:0,}" Sep 11 00:20:52.220370 containerd[1552]: time="2025-09-11T00:20:52.220296656Z" level=info msg="connecting to shim a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798" address="unix:///run/containerd/s/1347f118ec57ebca5b37b4ad7d803dfa240deb02c08be92d597a7a7e6fae7f39" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:52.268071 systemd[1]: Started cri-containerd-a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798.scope - libcontainer container 
a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798. Sep 11 00:20:52.314176 containerd[1552]: time="2025-09-11T00:20:52.314093423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvjt,Uid:beacdfe3-e1d7-4fde-ae92-2c0322dd90df,Namespace:kube-system,Attempt:0,} returns sandbox id \"a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798\"" Sep 11 00:20:52.315324 kubelet[2738]: E0911 00:20:52.315282 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:52.323158 containerd[1552]: time="2025-09-11T00:20:52.323087871Z" level=info msg="CreateContainer within sandbox \"a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:20:52.344192 containerd[1552]: time="2025-09-11T00:20:52.344140060Z" level=info msg="Container 069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:52.356835 containerd[1552]: time="2025-09-11T00:20:52.356696805Z" level=info msg="CreateContainer within sandbox \"a35a1cfa48b8463fe36a4cf169638e5cee7cb7a06f0c634c57ff450d38655798\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a\"" Sep 11 00:20:52.357917 containerd[1552]: time="2025-09-11T00:20:52.357848694Z" level=info msg="StartContainer for \"069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a\"" Sep 11 00:20:52.361913 containerd[1552]: time="2025-09-11T00:20:52.361833681Z" level=info msg="connecting to shim 069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a" address="unix:///run/containerd/s/1347f118ec57ebca5b37b4ad7d803dfa240deb02c08be92d597a7a7e6fae7f39" protocol=ttrpc version=3 Sep 11 00:20:52.371896 containerd[1552]: 
time="2025-09-11T00:20:52.371831309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qpm4r,Uid:77f7c161-c811-4d47-9569-e0976c3efcb0,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:20:52.395843 systemd[1]: Started cri-containerd-069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a.scope - libcontainer container 069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a. Sep 11 00:20:52.404762 containerd[1552]: time="2025-09-11T00:20:52.404695348Z" level=info msg="connecting to shim a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc" address="unix:///run/containerd/s/63f05afbf8149533ad2bad543b1c83dcae72e66a10032c5ebcd64f9528534996" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:20:52.445889 systemd[1]: Started cri-containerd-a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc.scope - libcontainer container a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc. Sep 11 00:20:52.479415 containerd[1552]: time="2025-09-11T00:20:52.479307672Z" level=info msg="StartContainer for \"069d515e75b1f66315a8df6bfdadc0f7d7455572b571a3596a718b6c4f95e08a\" returns successfully" Sep 11 00:20:52.549902 containerd[1552]: time="2025-09-11T00:20:52.549837401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-qpm4r,Uid:77f7c161-c811-4d47-9569-e0976c3efcb0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc\"" Sep 11 00:20:52.557743 containerd[1552]: time="2025-09-11T00:20:52.557701558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:20:52.560028 systemd-resolved[1403]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Sep 11 00:20:52.657544 kubelet[2738]: E0911 00:20:52.657480 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:53.021671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415676475.mount: Deactivated successfully. Sep 11 00:20:54.479068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044432434.mount: Deactivated successfully. Sep 11 00:20:55.902574 containerd[1552]: time="2025-09-11T00:20:55.901845066Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:55.903994 containerd[1552]: time="2025-09-11T00:20:55.903942192Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 11 00:20:55.905090 containerd[1552]: time="2025-09-11T00:20:55.905023913Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:55.907632 containerd[1552]: time="2025-09-11T00:20:55.907559443Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:20:55.909429 containerd[1552]: time="2025-09-11T00:20:55.908356290Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.350396419s" Sep 11 00:20:55.909429 containerd[1552]: time="2025-09-11T00:20:55.908402444Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 11 00:20:55.915982 containerd[1552]: time="2025-09-11T00:20:55.915894926Z" level=info msg="CreateContainer within sandbox \"a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:20:55.937045 containerd[1552]: time="2025-09-11T00:20:55.936930290Z" level=info msg="Container 445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:20:55.946784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531386483.mount: Deactivated successfully. Sep 11 00:20:55.957226 containerd[1552]: time="2025-09-11T00:20:55.957098675Z" level=info msg="CreateContainer within sandbox \"a01a8108b0d644b41c5de3b5f3599642dbcb4d4a153da0f68524daea9a35bbcc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761\"" Sep 11 00:20:55.960285 containerd[1552]: time="2025-09-11T00:20:55.960226197Z" level=info msg="StartContainer for \"445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761\"" Sep 11 00:20:55.961924 containerd[1552]: time="2025-09-11T00:20:55.961724172Z" level=info msg="connecting to shim 445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761" address="unix:///run/containerd/s/63f05afbf8149533ad2bad543b1c83dcae72e66a10032c5ebcd64f9528534996" protocol=ttrpc version=3 Sep 11 00:20:55.998917 systemd[1]: Started cri-containerd-445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761.scope - libcontainer container 445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761. 
Sep 11 00:20:56.051589 containerd[1552]: time="2025-09-11T00:20:56.051389586Z" level=info msg="StartContainer for \"445eaff4e624f3e04801e56562f081bf9e214a997470ef0c86b9471d5fd7f761\" returns successfully" Sep 11 00:20:56.681663 kubelet[2738]: I0911 00:20:56.681550 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgvjt" podStartSLOduration=5.680942853 podStartE2EDuration="5.680942853s" podCreationTimestamp="2025-09-11 00:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:20:52.676937812 +0000 UTC m=+6.402987807" watchObservedRunningTime="2025-09-11 00:20:56.680942853 +0000 UTC m=+10.406992853" Sep 11 00:20:56.682774 kubelet[2738]: I0911 00:20:56.681810 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-qpm4r" podStartSLOduration=1.329355526 podStartE2EDuration="4.681777734s" podCreationTimestamp="2025-09-11 00:20:52 +0000 UTC" firstStartedPulling="2025-09-11 00:20:52.557096789 +0000 UTC m=+6.283146765" lastFinishedPulling="2025-09-11 00:20:55.909518991 +0000 UTC m=+9.635568973" observedRunningTime="2025-09-11 00:20:56.680774625 +0000 UTC m=+10.406824614" watchObservedRunningTime="2025-09-11 00:20:56.681777734 +0000 UTC m=+10.407827737" Sep 11 00:20:57.693975 kubelet[2738]: E0911 00:20:57.692116 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:58.674998 kubelet[2738]: E0911 00:20:58.674932 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:58.820655 kubelet[2738]: E0911 00:20:58.820593 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:59.000666 kubelet[2738]: E0911 00:20:59.000274 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:20:59.679104 kubelet[2738]: E0911 00:20:59.679045 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:03.943583 sudo[1792]: pam_unix(sudo:session): session closed for user root Sep 11 00:21:03.960958 sshd[1791]: Connection closed by 147.75.109.163 port 56932 Sep 11 00:21:03.965320 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:03.974361 systemd[1]: sshd@7-137.184.47.128:22-147.75.109.163:56932.service: Deactivated successfully. Sep 11 00:21:03.982072 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:21:03.985321 systemd[1]: session-7.scope: Consumed 7.286s CPU time, 160.5M memory peak. Sep 11 00:21:03.990787 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Sep 11 00:21:03.997651 systemd-logind[1523]: Removed session 7. Sep 11 00:21:07.844722 systemd[1]: Created slice kubepods-besteffort-podf3ff7c04_0540_4e79_91c2_7ad0490e7f69.slice - libcontainer container kubepods-besteffort-podf3ff7c04_0540_4e79_91c2_7ad0490e7f69.slice. 
Sep 11 00:21:07.943077 kubelet[2738]: I0911 00:21:07.942931 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ff7c04-0540-4e79-91c2-7ad0490e7f69-tigera-ca-bundle\") pod \"calico-typha-97b9bf959-hnhrc\" (UID: \"f3ff7c04-0540-4e79-91c2-7ad0490e7f69\") " pod="calico-system/calico-typha-97b9bf959-hnhrc" Sep 11 00:21:07.943077 kubelet[2738]: I0911 00:21:07.942979 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3ff7c04-0540-4e79-91c2-7ad0490e7f69-typha-certs\") pod \"calico-typha-97b9bf959-hnhrc\" (UID: \"f3ff7c04-0540-4e79-91c2-7ad0490e7f69\") " pod="calico-system/calico-typha-97b9bf959-hnhrc" Sep 11 00:21:07.943077 kubelet[2738]: I0911 00:21:07.943003 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7xl\" (UniqueName: \"kubernetes.io/projected/f3ff7c04-0540-4e79-91c2-7ad0490e7f69-kube-api-access-7b7xl\") pod \"calico-typha-97b9bf959-hnhrc\" (UID: \"f3ff7c04-0540-4e79-91c2-7ad0490e7f69\") " pod="calico-system/calico-typha-97b9bf959-hnhrc" Sep 11 00:21:08.088401 systemd[1]: Created slice kubepods-besteffort-pod10e15e6b_16bf_4628_bf48_1ac3f601d377.slice - libcontainer container kubepods-besteffort-pod10e15e6b_16bf_4628_bf48_1ac3f601d377.slice. 
Sep 11 00:21:08.145797 kubelet[2738]: I0911 00:21:08.144493 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-lib-modules\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.145797 kubelet[2738]: I0911 00:21:08.144583 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10e15e6b-16bf-4628-bf48-1ac3f601d377-tigera-ca-bundle\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.145797 kubelet[2738]: I0911 00:21:08.144617 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/10e15e6b-16bf-4628-bf48-1ac3f601d377-node-certs\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.145797 kubelet[2738]: I0911 00:21:08.144645 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-var-run-calico\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.145797 kubelet[2738]: I0911 00:21:08.144674 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzk5l\" (UniqueName: \"kubernetes.io/projected/10e15e6b-16bf-4628-bf48-1ac3f601d377-kube-api-access-vzk5l\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146185 kubelet[2738]: I0911 00:21:08.144706 
2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-flexvol-driver-host\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146185 kubelet[2738]: I0911 00:21:08.144734 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-var-lib-calico\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146185 kubelet[2738]: I0911 00:21:08.144764 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-cni-net-dir\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146185 kubelet[2738]: I0911 00:21:08.144789 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-xtables-lock\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146185 kubelet[2738]: I0911 00:21:08.144831 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-cni-log-dir\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146429 kubelet[2738]: I0911 00:21:08.144860 2738 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-cni-bin-dir\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.146429 kubelet[2738]: I0911 00:21:08.144904 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/10e15e6b-16bf-4628-bf48-1ac3f601d377-policysync\") pod \"calico-node-6t8mr\" (UID: \"10e15e6b-16bf-4628-bf48-1ac3f601d377\") " pod="calico-system/calico-node-6t8mr" Sep 11 00:21:08.152812 kubelet[2738]: E0911 00:21:08.152756 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:08.154552 containerd[1552]: time="2025-09-11T00:21:08.154427463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-97b9bf959-hnhrc,Uid:f3ff7c04-0540-4e79-91c2-7ad0490e7f69,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:08.203265 containerd[1552]: time="2025-09-11T00:21:08.201583893Z" level=info msg="connecting to shim 177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f" address="unix:///run/containerd/s/3045a93d411b766f6fea96c145a8db8e210a26fd3e1372d34882a44ac92839f9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:08.257827 systemd[1]: Started cri-containerd-177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f.scope - libcontainer container 177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f. 
Sep 11 00:21:08.267491 kubelet[2738]: E0911 00:21:08.266766 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.267491 kubelet[2738]: W0911 00:21:08.266807 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.271559 kubelet[2738]: E0911 00:21:08.271104 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.277953 kubelet[2738]: E0911 00:21:08.277900 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.278172 kubelet[2738]: W0911 00:21:08.278152 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.278560 kubelet[2738]: E0911 00:21:08.278370 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.286490 kubelet[2738]: E0911 00:21:08.286366 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.286490 kubelet[2738]: W0911 00:21:08.286414 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.288144 kubelet[2738]: E0911 00:21:08.286441 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.292782 kubelet[2738]: E0911 00:21:08.292747 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.293148 kubelet[2738]: W0911 00:21:08.293129 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.293667 kubelet[2738]: E0911 00:21:08.293645 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.294292 kubelet[2738]: E0911 00:21:08.294273 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.294954 kubelet[2738]: W0911 00:21:08.294887 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.294954 kubelet[2738]: E0911 00:21:08.294920 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.296963 kubelet[2738]: E0911 00:21:08.296916 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.297228 kubelet[2738]: W0911 00:21:08.296938 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.297228 kubelet[2738]: E0911 00:21:08.297105 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.297979 kubelet[2738]: E0911 00:21:08.297719 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.298128 kubelet[2738]: W0911 00:21:08.298089 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.298128 kubelet[2738]: E0911 00:21:08.298111 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.298936 kubelet[2738]: E0911 00:21:08.298578 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.298936 kubelet[2738]: W0911 00:21:08.298591 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.298936 kubelet[2738]: E0911 00:21:08.298604 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.299453 kubelet[2738]: E0911 00:21:08.299427 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.299656 kubelet[2738]: W0911 00:21:08.299618 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.299656 kubelet[2738]: E0911 00:21:08.299638 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.300924 kubelet[2738]: E0911 00:21:08.300837 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.300924 kubelet[2738]: W0911 00:21:08.300856 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.300924 kubelet[2738]: E0911 00:21:08.300873 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.302464 kubelet[2738]: E0911 00:21:08.301513 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.302653 kubelet[2738]: W0911 00:21:08.302583 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.302653 kubelet[2738]: E0911 00:21:08.302606 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.303234 kubelet[2738]: E0911 00:21:08.303189 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.303234 kubelet[2738]: W0911 00:21:08.303204 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.303234 kubelet[2738]: E0911 00:21:08.303218 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.303854 kubelet[2738]: E0911 00:21:08.303780 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.303854 kubelet[2738]: W0911 00:21:08.303826 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.303854 kubelet[2738]: E0911 00:21:08.303839 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.305695 kubelet[2738]: E0911 00:21:08.305674 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.305968 kubelet[2738]: W0911 00:21:08.305822 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.305968 kubelet[2738]: E0911 00:21:08.305846 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.306731 kubelet[2738]: E0911 00:21:08.306616 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.306731 kubelet[2738]: W0911 00:21:08.306673 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.306731 kubelet[2738]: E0911 00:21:08.306689 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.339610 kubelet[2738]: E0911 00:21:08.339021 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:08.421576 kubelet[2738]: E0911 00:21:08.419704 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.421576 kubelet[2738]: W0911 00:21:08.419740 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.421963 kubelet[2738]: E0911 00:21:08.421522 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.424763 kubelet[2738]: E0911 00:21:08.423730 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.424763 kubelet[2738]: W0911 00:21:08.423760 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.424763 kubelet[2738]: E0911 00:21:08.423791 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.425505 kubelet[2738]: E0911 00:21:08.425383 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.425505 kubelet[2738]: W0911 00:21:08.425405 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.425942 kubelet[2738]: E0911 00:21:08.425697 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.427658 kubelet[2738]: E0911 00:21:08.427605 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.428082 kubelet[2738]: W0911 00:21:08.427991 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.428082 kubelet[2738]: E0911 00:21:08.428024 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.428737 kubelet[2738]: E0911 00:21:08.428674 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.428737 kubelet[2738]: W0911 00:21:08.428689 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.429744 kubelet[2738]: E0911 00:21:08.429579 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.430237 kubelet[2738]: E0911 00:21:08.430215 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.430445 kubelet[2738]: W0911 00:21:08.430310 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.430445 kubelet[2738]: E0911 00:21:08.430334 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.430893 kubelet[2738]: E0911 00:21:08.430697 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.430893 kubelet[2738]: W0911 00:21:08.430710 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.430893 kubelet[2738]: E0911 00:21:08.430725 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.431751 kubelet[2738]: E0911 00:21:08.431708 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.431908 kubelet[2738]: W0911 00:21:08.431730 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.431908 kubelet[2738]: E0911 00:21:08.431867 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.434360 kubelet[2738]: E0911 00:21:08.434202 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.434360 kubelet[2738]: W0911 00:21:08.434222 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.434360 kubelet[2738]: E0911 00:21:08.434241 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.436782 containerd[1552]: time="2025-09-11T00:21:08.434962211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t8mr,Uid:10e15e6b-16bf-4628-bf48-1ac3f601d377,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:08.437352 kubelet[2738]: E0911 00:21:08.436950 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.437352 kubelet[2738]: W0911 00:21:08.437283 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.437352 kubelet[2738]: E0911 00:21:08.437314 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.439259 kubelet[2738]: E0911 00:21:08.439233 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.440429 kubelet[2738]: W0911 00:21:08.440092 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.440429 kubelet[2738]: E0911 00:21:08.440133 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.441800 kubelet[2738]: E0911 00:21:08.441652 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.441800 kubelet[2738]: W0911 00:21:08.441671 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.441800 kubelet[2738]: E0911 00:21:08.441694 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.445467 kubelet[2738]: E0911 00:21:08.445240 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.445467 kubelet[2738]: W0911 00:21:08.445273 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.445467 kubelet[2738]: E0911 00:21:08.445300 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.446386 kubelet[2738]: E0911 00:21:08.446034 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.446386 kubelet[2738]: W0911 00:21:08.446057 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.446386 kubelet[2738]: E0911 00:21:08.446077 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.447813 kubelet[2738]: E0911 00:21:08.447761 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.448045 kubelet[2738]: W0911 00:21:08.447925 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.448045 kubelet[2738]: E0911 00:21:08.447953 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.449781 kubelet[2738]: E0911 00:21:08.449703 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.449781 kubelet[2738]: W0911 00:21:08.449724 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.449781 kubelet[2738]: E0911 00:21:08.449746 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.450958 kubelet[2738]: E0911 00:21:08.450730 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.450958 kubelet[2738]: W0911 00:21:08.450749 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.450958 kubelet[2738]: E0911 00:21:08.450769 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.451669 kubelet[2738]: E0911 00:21:08.451648 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.451872 kubelet[2738]: W0911 00:21:08.451851 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.452100 kubelet[2738]: E0911 00:21:08.451952 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.454275 kubelet[2738]: E0911 00:21:08.452383 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.454275 kubelet[2738]: W0911 00:21:08.452399 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.454275 kubelet[2738]: E0911 00:21:08.452419 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.454689 kubelet[2738]: E0911 00:21:08.454601 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.454689 kubelet[2738]: W0911 00:21:08.454628 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.454689 kubelet[2738]: E0911 00:21:08.454652 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.455630 kubelet[2738]: E0911 00:21:08.455606 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.455808 kubelet[2738]: W0911 00:21:08.455787 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.455906 kubelet[2738]: E0911 00:21:08.455888 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.456210 kubelet[2738]: I0911 00:21:08.455989 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c-varrun\") pod \"csi-node-driver-mqnt4\" (UID: \"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c\") " pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:08.457802 kubelet[2738]: E0911 00:21:08.457776 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.458058 kubelet[2738]: W0911 00:21:08.457920 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.458058 kubelet[2738]: E0911 00:21:08.457951 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.458058 kubelet[2738]: I0911 00:21:08.458002 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg6nz\" (UniqueName: \"kubernetes.io/projected/7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c-kube-api-access-tg6nz\") pod \"csi-node-driver-mqnt4\" (UID: \"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c\") " pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:08.458811 kubelet[2738]: E0911 00:21:08.458780 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.458936 kubelet[2738]: W0911 00:21:08.458807 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.459047 kubelet[2738]: E0911 00:21:08.458938 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.460300 kubelet[2738]: E0911 00:21:08.460270 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.460300 kubelet[2738]: W0911 00:21:08.460293 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.460443 kubelet[2738]: E0911 00:21:08.460313 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.461436 kubelet[2738]: E0911 00:21:08.461398 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.461436 kubelet[2738]: W0911 00:21:08.461418 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.461436 kubelet[2738]: E0911 00:21:08.461436 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.461670 kubelet[2738]: I0911 00:21:08.461485 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c-socket-dir\") pod \"csi-node-driver-mqnt4\" (UID: \"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c\") " pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:08.463179 kubelet[2738]: E0911 00:21:08.463057 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.463179 kubelet[2738]: W0911 00:21:08.463081 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.463179 kubelet[2738]: E0911 00:21:08.463105 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.463662 kubelet[2738]: E0911 00:21:08.463647 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.463827 kubelet[2738]: W0911 00:21:08.463712 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.463827 kubelet[2738]: E0911 00:21:08.463728 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.464339 kubelet[2738]: E0911 00:21:08.464320 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.464711 kubelet[2738]: W0911 00:21:08.464465 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.464711 kubelet[2738]: E0911 00:21:08.464484 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.464711 kubelet[2738]: I0911 00:21:08.464641 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c-kubelet-dir\") pod \"csi-node-driver-mqnt4\" (UID: \"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c\") " pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:08.465470 kubelet[2738]: E0911 00:21:08.465447 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.465649 kubelet[2738]: W0911 00:21:08.465630 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.467121 kubelet[2738]: E0911 00:21:08.467083 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.467574 kubelet[2738]: E0911 00:21:08.467555 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.467836 kubelet[2738]: W0911 00:21:08.467813 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.467923 kubelet[2738]: E0911 00:21:08.467909 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.468631 kubelet[2738]: E0911 00:21:08.468308 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.470259 kubelet[2738]: W0911 00:21:08.469584 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.470259 kubelet[2738]: E0911 00:21:08.469619 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.470259 kubelet[2738]: I0911 00:21:08.470074 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c-registration-dir\") pod \"csi-node-driver-mqnt4\" (UID: \"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c\") " pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:08.470259 kubelet[2738]: E0911 00:21:08.470208 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.470259 kubelet[2738]: W0911 00:21:08.470220 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.470259 kubelet[2738]: E0911 00:21:08.470237 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.472742 kubelet[2738]: E0911 00:21:08.472718 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.472863 kubelet[2738]: W0911 00:21:08.472846 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.472983 kubelet[2738]: E0911 00:21:08.472967 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.473651 kubelet[2738]: E0911 00:21:08.473631 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.475396 kubelet[2738]: W0911 00:21:08.475369 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.475551 kubelet[2738]: E0911 00:21:08.475515 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.475793 kubelet[2738]: E0911 00:21:08.475783 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.475848 kubelet[2738]: W0911 00:21:08.475839 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.475890 kubelet[2738]: E0911 00:21:08.475883 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.496920 containerd[1552]: time="2025-09-11T00:21:08.496838989Z" level=info msg="connecting to shim 0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2" address="unix:///run/containerd/s/a512dfe2411ae0fa1743e27d26f24b4acd037cbebc8d4a243ad6625010208f49" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:08.547772 systemd[1]: Started cri-containerd-0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2.scope - libcontainer container 0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2. Sep 11 00:21:08.571508 kubelet[2738]: E0911 00:21:08.571438 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.571508 kubelet[2738]: W0911 00:21:08.571468 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.571894 kubelet[2738]: E0911 00:21:08.571605 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.572974 kubelet[2738]: E0911 00:21:08.572658 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.572974 kubelet[2738]: W0911 00:21:08.572684 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.572974 kubelet[2738]: E0911 00:21:08.572726 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.573341 kubelet[2738]: E0911 00:21:08.573324 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.573681 kubelet[2738]: W0911 00:21:08.573571 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.573681 kubelet[2738]: E0911 00:21:08.573596 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.575506 kubelet[2738]: E0911 00:21:08.575251 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.575506 kubelet[2738]: W0911 00:21:08.575273 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.575506 kubelet[2738]: E0911 00:21:08.575291 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.575866 kubelet[2738]: E0911 00:21:08.575801 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.575866 kubelet[2738]: W0911 00:21:08.575834 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.575866 kubelet[2738]: E0911 00:21:08.575850 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.577605 kubelet[2738]: E0911 00:21:08.577414 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.577605 kubelet[2738]: W0911 00:21:08.577437 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.577605 kubelet[2738]: E0911 00:21:08.577481 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.579015 kubelet[2738]: E0911 00:21:08.578843 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.579015 kubelet[2738]: W0911 00:21:08.578883 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.579015 kubelet[2738]: E0911 00:21:08.578902 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.579829 kubelet[2738]: E0911 00:21:08.579695 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.581086 kubelet[2738]: W0911 00:21:08.579977 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.581086 kubelet[2738]: E0911 00:21:08.580008 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.581649 kubelet[2738]: E0911 00:21:08.581443 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.581876 kubelet[2738]: W0911 00:21:08.581521 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.581876 kubelet[2738]: E0911 00:21:08.581832 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.583754 kubelet[2738]: E0911 00:21:08.583706 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.583982 kubelet[2738]: W0911 00:21:08.583918 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.583982 kubelet[2738]: E0911 00:21:08.583951 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.586042 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.587459 kubelet[2738]: W0911 00:21:08.586070 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.586638 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.587013 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.587459 kubelet[2738]: W0911 00:21:08.587048 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.587071 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.587359 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.587459 kubelet[2738]: W0911 00:21:08.587380 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.587459 kubelet[2738]: E0911 00:21:08.587397 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.589466 kubelet[2738]: E0911 00:21:08.589332 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.589466 kubelet[2738]: W0911 00:21:08.589354 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.589466 kubelet[2738]: E0911 00:21:08.589373 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.590044 kubelet[2738]: E0911 00:21:08.589962 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.590044 kubelet[2738]: W0911 00:21:08.589977 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.590044 kubelet[2738]: E0911 00:21:08.589991 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.590901 kubelet[2738]: E0911 00:21:08.590682 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.590901 kubelet[2738]: W0911 00:21:08.590702 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.590901 kubelet[2738]: E0911 00:21:08.590720 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.591869 kubelet[2738]: E0911 00:21:08.591522 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.591869 kubelet[2738]: W0911 00:21:08.591572 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.591869 kubelet[2738]: E0911 00:21:08.591589 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.593968 kubelet[2738]: E0911 00:21:08.593619 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.593968 kubelet[2738]: W0911 00:21:08.593640 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.593968 kubelet[2738]: E0911 00:21:08.593658 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.593968 kubelet[2738]: E0911 00:21:08.593866 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.593968 kubelet[2738]: W0911 00:21:08.593875 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.593968 kubelet[2738]: E0911 00:21:08.593887 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.597019 kubelet[2738]: E0911 00:21:08.596637 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.597019 kubelet[2738]: W0911 00:21:08.596662 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.597019 kubelet[2738]: E0911 00:21:08.596684 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.598273 kubelet[2738]: E0911 00:21:08.597796 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.598435 kubelet[2738]: W0911 00:21:08.598412 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.598743 kubelet[2738]: E0911 00:21:08.598508 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.600745 kubelet[2738]: E0911 00:21:08.600484 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.600745 kubelet[2738]: W0911 00:21:08.600508 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.600745 kubelet[2738]: E0911 00:21:08.600558 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.601420 kubelet[2738]: E0911 00:21:08.601057 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.601420 kubelet[2738]: W0911 00:21:08.601077 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.601420 kubelet[2738]: E0911 00:21:08.601096 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.602232 kubelet[2738]: E0911 00:21:08.601966 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.602232 kubelet[2738]: W0911 00:21:08.602074 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.602232 kubelet[2738]: E0911 00:21:08.602098 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.604317 kubelet[2738]: E0911 00:21:08.604226 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.604317 kubelet[2738]: W0911 00:21:08.604249 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.604317 kubelet[2738]: E0911 00:21:08.604270 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:08.641303 kubelet[2738]: E0911 00:21:08.641258 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:08.641303 kubelet[2738]: W0911 00:21:08.641291 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:08.641555 kubelet[2738]: E0911 00:21:08.641317 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:08.661960 containerd[1552]: time="2025-09-11T00:21:08.661808979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-97b9bf959-hnhrc,Uid:f3ff7c04-0540-4e79-91c2-7ad0490e7f69,Namespace:calico-system,Attempt:0,} returns sandbox id \"177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f\"" Sep 11 00:21:08.667760 kubelet[2738]: E0911 00:21:08.667146 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:08.670352 containerd[1552]: time="2025-09-11T00:21:08.670024975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:21:08.744675 containerd[1552]: time="2025-09-11T00:21:08.743271353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6t8mr,Uid:10e15e6b-16bf-4628-bf48-1ac3f601d377,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\"" Sep 11 00:21:09.564747 kubelet[2738]: E0911 00:21:09.564682 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:10.325645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158858831.mount: Deactivated successfully. Sep 11 00:21:11.525573 containerd[1552]: time="2025-09-11T00:21:11.523500687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:11.526503 containerd[1552]: time="2025-09-11T00:21:11.525569355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 11 00:21:11.528061 containerd[1552]: time="2025-09-11T00:21:11.527911777Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:11.534561 containerd[1552]: time="2025-09-11T00:21:11.534265017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:11.535181 containerd[1552]: time="2025-09-11T00:21:11.535070810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.864991661s" Sep 11 00:21:11.535595 containerd[1552]: time="2025-09-11T00:21:11.535568821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 11 00:21:11.540813 containerd[1552]: time="2025-09-11T00:21:11.540754637Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:21:11.564498 kubelet[2738]: E0911 00:21:11.564433 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:11.571817 containerd[1552]: time="2025-09-11T00:21:11.571458980Z" level=info msg="CreateContainer within sandbox \"177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:21:11.591208 containerd[1552]: time="2025-09-11T00:21:11.589232694Z" level=info msg="Container 1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:11.600696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651208899.mount: Deactivated successfully. 
Sep 11 00:21:11.695877 containerd[1552]: time="2025-09-11T00:21:11.695799362Z" level=info msg="CreateContainer within sandbox \"177eb0f519665819458acfef067e1e82e2443d9fbae83be000a56369b0d7858f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566\"" Sep 11 00:21:11.699420 containerd[1552]: time="2025-09-11T00:21:11.699352705Z" level=info msg="StartContainer for \"1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566\"" Sep 11 00:21:11.706652 containerd[1552]: time="2025-09-11T00:21:11.705471356Z" level=info msg="connecting to shim 1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566" address="unix:///run/containerd/s/3045a93d411b766f6fea96c145a8db8e210a26fd3e1372d34882a44ac92839f9" protocol=ttrpc version=3 Sep 11 00:21:11.759051 systemd[1]: Started cri-containerd-1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566.scope - libcontainer container 1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566. 
Sep 11 00:21:11.931990 containerd[1552]: time="2025-09-11T00:21:11.931934353Z" level=info msg="StartContainer for \"1c170fea63caff3398da51c8685cb90ab39c9520bd61904a48b5b2c9bfefb566\" returns successfully" Sep 11 00:21:12.776049 kubelet[2738]: E0911 00:21:12.775995 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:12.792444 kubelet[2738]: E0911 00:21:12.792381 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.792444 kubelet[2738]: W0911 00:21:12.792414 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.792444 kubelet[2738]: E0911 00:21:12.792448 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.793150 kubelet[2738]: E0911 00:21:12.792965 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.793150 kubelet[2738]: W0911 00:21:12.792994 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.793150 kubelet[2738]: E0911 00:21:12.793018 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.793662 kubelet[2738]: E0911 00:21:12.793625 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.793749 kubelet[2738]: W0911 00:21:12.793667 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.793749 kubelet[2738]: E0911 00:21:12.793684 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795052 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796089 kubelet[2738]: W0911 00:21:12.795072 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795087 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795382 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796089 kubelet[2738]: W0911 00:21:12.795392 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795403 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795573 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796089 kubelet[2738]: W0911 00:21:12.795581 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795593 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.796089 kubelet[2738]: E0911 00:21:12.795715 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796670 kubelet[2738]: W0911 00:21:12.795721 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796670 kubelet[2738]: E0911 00:21:12.795729 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.796670 kubelet[2738]: E0911 00:21:12.796160 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796670 kubelet[2738]: W0911 00:21:12.796172 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796670 kubelet[2738]: E0911 00:21:12.796183 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.796886 kubelet[2738]: E0911 00:21:12.796720 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.796886 kubelet[2738]: W0911 00:21:12.796732 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.796886 kubelet[2738]: E0911 00:21:12.796744 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.797632 kubelet[2738]: E0911 00:21:12.797081 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.797632 kubelet[2738]: W0911 00:21:12.797103 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.797632 kubelet[2738]: E0911 00:21:12.797118 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.797632 kubelet[2738]: E0911 00:21:12.797408 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.797632 kubelet[2738]: W0911 00:21:12.797425 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.797632 kubelet[2738]: E0911 00:21:12.797440 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.797848 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.799189 kubelet[2738]: W0911 00:21:12.797859 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.797870 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.798106 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.799189 kubelet[2738]: W0911 00:21:12.798114 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.798156 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.798442 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.799189 kubelet[2738]: W0911 00:21:12.798455 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.798469 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.799189 kubelet[2738]: E0911 00:21:12.799024 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.800665 kubelet[2738]: W0911 00:21:12.799039 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.800665 kubelet[2738]: E0911 00:21:12.799055 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.808831 kubelet[2738]: I0911 00:21:12.808662 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-97b9bf959-hnhrc" podStartSLOduration=2.940369723 podStartE2EDuration="5.808633789s" podCreationTimestamp="2025-09-11 00:21:07 +0000 UTC" firstStartedPulling="2025-09-11 00:21:08.669671814 +0000 UTC m=+22.395721783" lastFinishedPulling="2025-09-11 00:21:11.537935845 +0000 UTC m=+25.263985849" observedRunningTime="2025-09-11 00:21:12.80845468 +0000 UTC m=+26.534504675" watchObservedRunningTime="2025-09-11 00:21:12.808633789 +0000 UTC m=+26.534683786" Sep 11 00:21:12.816574 kubelet[2738]: E0911 00:21:12.816384 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.816574 kubelet[2738]: W0911 00:21:12.816570 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.817221 kubelet[2738]: E0911 00:21:12.816595 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.817221 kubelet[2738]: E0911 00:21:12.817205 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.817221 kubelet[2738]: W0911 00:21:12.817218 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.817221 kubelet[2738]: E0911 00:21:12.817233 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.818689 kubelet[2738]: E0911 00:21:12.817571 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.818689 kubelet[2738]: W0911 00:21:12.817584 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.818689 kubelet[2738]: E0911 00:21:12.817691 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.819166 kubelet[2738]: E0911 00:21:12.819086 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.819652 kubelet[2738]: W0911 00:21:12.819301 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.819652 kubelet[2738]: E0911 00:21:12.819331 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.821388 kubelet[2738]: E0911 00:21:12.821166 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.822641 kubelet[2738]: W0911 00:21:12.821808 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.822641 kubelet[2738]: E0911 00:21:12.821839 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.823303 kubelet[2738]: E0911 00:21:12.823230 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.823303 kubelet[2738]: W0911 00:21:12.823251 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.823303 kubelet[2738]: E0911 00:21:12.823271 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.824745 kubelet[2738]: E0911 00:21:12.824725 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.824953 kubelet[2738]: W0911 00:21:12.824882 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.824953 kubelet[2738]: E0911 00:21:12.824913 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.825665 kubelet[2738]: E0911 00:21:12.825615 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.825944 kubelet[2738]: W0911 00:21:12.825669 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.825944 kubelet[2738]: E0911 00:21:12.825690 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.828118 kubelet[2738]: E0911 00:21:12.828001 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.828118 kubelet[2738]: W0911 00:21:12.828027 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.828118 kubelet[2738]: E0911 00:21:12.828046 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.829129 kubelet[2738]: E0911 00:21:12.829102 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.829129 kubelet[2738]: W0911 00:21:12.829122 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.829310 kubelet[2738]: E0911 00:21:12.829141 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.829981 kubelet[2738]: E0911 00:21:12.829959 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.829981 kubelet[2738]: W0911 00:21:12.829977 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.830126 kubelet[2738]: E0911 00:21:12.829991 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.831391 kubelet[2738]: E0911 00:21:12.831368 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.831391 kubelet[2738]: W0911 00:21:12.831387 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.831391 kubelet[2738]: E0911 00:21:12.831400 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.831806 kubelet[2738]: E0911 00:21:12.831698 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.831806 kubelet[2738]: W0911 00:21:12.831707 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.831806 kubelet[2738]: E0911 00:21:12.831717 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.833570 kubelet[2738]: E0911 00:21:12.833517 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.833570 kubelet[2738]: W0911 00:21:12.833558 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.833570 kubelet[2738]: E0911 00:21:12.833571 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.834138 kubelet[2738]: E0911 00:21:12.834118 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.835126 kubelet[2738]: W0911 00:21:12.835079 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.835126 kubelet[2738]: E0911 00:21:12.835120 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.840559 kubelet[2738]: E0911 00:21:12.839904 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.840559 kubelet[2738]: W0911 00:21:12.839939 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.846298 kubelet[2738]: E0911 00:21:12.846235 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:12.849298 kubelet[2738]: E0911 00:21:12.849217 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.849298 kubelet[2738]: W0911 00:21:12.849254 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.849298 kubelet[2738]: E0911 00:21:12.849283 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:21:12.851015 kubelet[2738]: E0911 00:21:12.850821 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:21:12.851015 kubelet[2738]: W0911 00:21:12.850853 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:21:12.851015 kubelet[2738]: E0911 00:21:12.850878 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:21:13.059882 containerd[1552]: time="2025-09-11T00:21:13.059818011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:13.061398 containerd[1552]: time="2025-09-11T00:21:13.061286166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 11 00:21:13.062184 containerd[1552]: time="2025-09-11T00:21:13.062142570Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:13.064394 containerd[1552]: time="2025-09-11T00:21:13.064325017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:13.065653 containerd[1552]: time="2025-09-11T00:21:13.065413414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.524601072s" Sep 11 00:21:13.065653 containerd[1552]: time="2025-09-11T00:21:13.065461109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 11 00:21:13.075564 containerd[1552]: time="2025-09-11T00:21:13.075253471Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:21:13.091853 containerd[1552]: time="2025-09-11T00:21:13.091779547Z" level=info msg="Container 625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:13.134419 containerd[1552]: time="2025-09-11T00:21:13.134001168Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\"" Sep 11 00:21:13.136149 containerd[1552]: time="2025-09-11T00:21:13.135900997Z" level=info msg="StartContainer for \"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\"" Sep 11 00:21:13.140010 containerd[1552]: time="2025-09-11T00:21:13.139861465Z" level=info msg="connecting to shim 625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9" address="unix:///run/containerd/s/a512dfe2411ae0fa1743e27d26f24b4acd037cbebc8d4a243ad6625010208f49" protocol=ttrpc version=3 Sep 11 00:21:13.182917 systemd[1]: Started cri-containerd-625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9.scope - libcontainer container 
625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9. Sep 11 00:21:13.267040 containerd[1552]: time="2025-09-11T00:21:13.266993061Z" level=info msg="StartContainer for \"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\" returns successfully" Sep 11 00:21:13.290411 systemd[1]: cri-containerd-625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9.scope: Deactivated successfully. Sep 11 00:21:13.330648 containerd[1552]: time="2025-09-11T00:21:13.329914507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\" id:\"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\" pid:3443 exited_at:{seconds:1757550073 nanos:296949560}" Sep 11 00:21:13.340522 containerd[1552]: time="2025-09-11T00:21:13.340446663Z" level=info msg="received exit event container_id:\"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\" id:\"625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9\" pid:3443 exited_at:{seconds:1757550073 nanos:296949560}" Sep 11 00:21:13.382602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-625fd087502fe36453b2f4fe9e662f155c86342ad6b7ba6f3f23590765c3dcf9-rootfs.mount: Deactivated successfully. 
Sep 11 00:21:13.565310 kubelet[2738]: E0911 00:21:13.565199 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:13.783725 kubelet[2738]: I0911 00:21:13.782853 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:21:13.784250 kubelet[2738]: E0911 00:21:13.783984 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:13.786075 containerd[1552]: time="2025-09-11T00:21:13.786020762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:21:15.565394 kubelet[2738]: E0911 00:21:15.564859 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:17.403460 kubelet[2738]: I0911 00:21:17.403085 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:21:17.405828 kubelet[2738]: E0911 00:21:17.405672 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:17.565618 kubelet[2738]: E0911 00:21:17.565021 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:17.798463 kubelet[2738]: E0911 00:21:17.798237 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:19.565144 kubelet[2738]: E0911 00:21:19.565049 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:19.651567 containerd[1552]: time="2025-09-11T00:21:19.651013512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:19.653405 containerd[1552]: time="2025-09-11T00:21:19.653357572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 11 00:21:19.654363 containerd[1552]: time="2025-09-11T00:21:19.654285275Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:19.658971 containerd[1552]: time="2025-09-11T00:21:19.658873748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:19.660269 containerd[1552]: time="2025-09-11T00:21:19.659694017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.873328725s" Sep 11 00:21:19.660269 containerd[1552]: time="2025-09-11T00:21:19.659746234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 11 00:21:19.667513 containerd[1552]: time="2025-09-11T00:21:19.667436749Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:21:19.682669 containerd[1552]: time="2025-09-11T00:21:19.681763989Z" level=info msg="Container c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:19.697590 containerd[1552]: time="2025-09-11T00:21:19.696976195Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\"" Sep 11 00:21:19.699655 containerd[1552]: time="2025-09-11T00:21:19.698748065Z" level=info msg="StartContainer for \"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\"" Sep 11 00:21:19.700764 containerd[1552]: time="2025-09-11T00:21:19.700731342Z" level=info msg="connecting to shim c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e" address="unix:///run/containerd/s/a512dfe2411ae0fa1743e27d26f24b4acd037cbebc8d4a243ad6625010208f49" protocol=ttrpc version=3 Sep 11 00:21:19.749849 systemd[1]: Started cri-containerd-c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e.scope - libcontainer container c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e. 
Sep 11 00:21:19.841812 containerd[1552]: time="2025-09-11T00:21:19.841395935Z" level=info msg="StartContainer for \"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\" returns successfully" Sep 11 00:21:20.563335 systemd[1]: cri-containerd-c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e.scope: Deactivated successfully. Sep 11 00:21:20.563766 systemd[1]: cri-containerd-c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e.scope: Consumed 690ms CPU time, 168.8M memory peak, 11.1M read from disk, 171.3M written to disk. Sep 11 00:21:20.604332 containerd[1552]: time="2025-09-11T00:21:20.603550054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\" id:\"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\" pid:3502 exited_at:{seconds:1757550080 nanos:569337539}" Sep 11 00:21:20.604332 containerd[1552]: time="2025-09-11T00:21:20.603607285Z" level=info msg="received exit event container_id:\"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\" id:\"c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e\" pid:3502 exited_at:{seconds:1757550080 nanos:569337539}" Sep 11 00:21:20.660462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c522724a83020f1d26b678a97b21ced2e88b7c674204a89dcbc4fada5129584e-rootfs.mount: Deactivated successfully. Sep 11 00:21:20.669896 kubelet[2738]: I0911 00:21:20.669852 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:21:20.738934 systemd[1]: Created slice kubepods-burstable-pod412e1c59_365c_4eba_a906_0bb277b80086.slice - libcontainer container kubepods-burstable-pod412e1c59_365c_4eba_a906_0bb277b80086.slice. 
Sep 11 00:21:20.785900 kubelet[2738]: I0911 00:21:20.784774 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-ca-bundle\") pod \"whisker-85cd94ff8d-s47c9\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " pod="calico-system/whisker-85cd94ff8d-s47c9" Sep 11 00:21:20.785900 kubelet[2738]: I0911 00:21:20.784844 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/36207bbd-453c-4e37-bd0f-a6d97176618f-goldmane-key-pair\") pod \"goldmane-54d579b49d-6km7x\" (UID: \"36207bbd-453c-4e37-bd0f-a6d97176618f\") " pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:20.785900 kubelet[2738]: I0911 00:21:20.784892 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc4jj\" (UniqueName: \"kubernetes.io/projected/1345181b-31c6-4f82-99c6-cdec28ab1c04-kube-api-access-zc4jj\") pod \"whisker-85cd94ff8d-s47c9\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " pod="calico-system/whisker-85cd94ff8d-s47c9" Sep 11 00:21:20.785900 kubelet[2738]: I0911 00:21:20.784941 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmt8\" (UniqueName: \"kubernetes.io/projected/322511c7-2325-4fa4-b2e6-292071b9159a-kube-api-access-xsmt8\") pod \"calico-apiserver-6d5bf468dc-pklpb\" (UID: \"322511c7-2325-4fa4-b2e6-292071b9159a\") " pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" Sep 11 00:21:20.785900 kubelet[2738]: I0911 00:21:20.784982 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36207bbd-453c-4e37-bd0f-a6d97176618f-config\") pod \"goldmane-54d579b49d-6km7x\" (UID: 
\"36207bbd-453c-4e37-bd0f-a6d97176618f\") " pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:20.786401 kubelet[2738]: I0911 00:21:20.785019 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36207bbd-453c-4e37-bd0f-a6d97176618f-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6km7x\" (UID: \"36207bbd-453c-4e37-bd0f-a6d97176618f\") " pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:20.786401 kubelet[2738]: I0911 00:21:20.785169 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fms2w\" (UniqueName: \"kubernetes.io/projected/412e1c59-365c-4eba-a906-0bb277b80086-kube-api-access-fms2w\") pod \"coredns-674b8bbfcf-59lvw\" (UID: \"412e1c59-365c-4eba-a906-0bb277b80086\") " pod="kube-system/coredns-674b8bbfcf-59lvw" Sep 11 00:21:20.786401 kubelet[2738]: I0911 00:21:20.785211 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-backend-key-pair\") pod \"whisker-85cd94ff8d-s47c9\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " pod="calico-system/whisker-85cd94ff8d-s47c9" Sep 11 00:21:20.786401 kubelet[2738]: I0911 00:21:20.785252 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvm2\" (UniqueName: \"kubernetes.io/projected/36207bbd-453c-4e37-bd0f-a6d97176618f-kube-api-access-hbvm2\") pod \"goldmane-54d579b49d-6km7x\" (UID: \"36207bbd-453c-4e37-bd0f-a6d97176618f\") " pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:20.786401 kubelet[2738]: I0911 00:21:20.785300 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/322511c7-2325-4fa4-b2e6-292071b9159a-calico-apiserver-certs\") pod \"calico-apiserver-6d5bf468dc-pklpb\" (UID: \"322511c7-2325-4fa4-b2e6-292071b9159a\") " pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" Sep 11 00:21:20.786546 kubelet[2738]: I0911 00:21:20.785496 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/412e1c59-365c-4eba-a906-0bb277b80086-config-volume\") pod \"coredns-674b8bbfcf-59lvw\" (UID: \"412e1c59-365c-4eba-a906-0bb277b80086\") " pod="kube-system/coredns-674b8bbfcf-59lvw" Sep 11 00:21:20.799067 systemd[1]: Created slice kubepods-besteffort-pod1345181b_31c6_4f82_99c6_cdec28ab1c04.slice - libcontainer container kubepods-besteffort-pod1345181b_31c6_4f82_99c6_cdec28ab1c04.slice. Sep 11 00:21:20.817285 systemd[1]: Created slice kubepods-besteffort-pod322511c7_2325_4fa4_b2e6_292071b9159a.slice - libcontainer container kubepods-besteffort-pod322511c7_2325_4fa4_b2e6_292071b9159a.slice. Sep 11 00:21:20.832010 systemd[1]: Created slice kubepods-besteffort-pod36207bbd_453c_4e37_bd0f_a6d97176618f.slice - libcontainer container kubepods-besteffort-pod36207bbd_453c_4e37_bd0f_a6d97176618f.slice. Sep 11 00:21:20.853500 systemd[1]: Created slice kubepods-besteffort-pod8e74a5fd_f569_4d8d_b0ea_4824cc4876e2.slice - libcontainer container kubepods-besteffort-pod8e74a5fd_f569_4d8d_b0ea_4824cc4876e2.slice. Sep 11 00:21:20.860250 containerd[1552]: time="2025-09-11T00:21:20.860199488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:21:20.874982 systemd[1]: Created slice kubepods-besteffort-pod12046b83_3ea2_4fdc_9dea_dc738a353e9a.slice - libcontainer container kubepods-besteffort-pod12046b83_3ea2_4fdc_9dea_dc738a353e9a.slice. 
Sep 11 00:21:20.889254 kubelet[2738]: I0911 00:21:20.889192 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e74a5fd-f569-4d8d-b0ea-4824cc4876e2-tigera-ca-bundle\") pod \"calico-kube-controllers-7bd84ccdd6-j58dq\" (UID: \"8e74a5fd-f569-4d8d-b0ea-4824cc4876e2\") " pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" Sep 11 00:21:20.889482 kubelet[2738]: I0911 00:21:20.889289 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12046b83-3ea2-4fdc-9dea-dc738a353e9a-calico-apiserver-certs\") pod \"calico-apiserver-6d5bf468dc-mhkf5\" (UID: \"12046b83-3ea2-4fdc-9dea-dc738a353e9a\") " pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" Sep 11 00:21:20.889482 kubelet[2738]: I0911 00:21:20.889328 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cafb586-b280-45c4-b4f2-b083ee293a60-config-volume\") pod \"coredns-674b8bbfcf-rtc4d\" (UID: \"1cafb586-b280-45c4-b4f2-b083ee293a60\") " pod="kube-system/coredns-674b8bbfcf-rtc4d" Sep 11 00:21:20.889482 kubelet[2738]: I0911 00:21:20.889364 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrvd\" (UniqueName: \"kubernetes.io/projected/1cafb586-b280-45c4-b4f2-b083ee293a60-kube-api-access-jbrvd\") pod \"coredns-674b8bbfcf-rtc4d\" (UID: \"1cafb586-b280-45c4-b4f2-b083ee293a60\") " pod="kube-system/coredns-674b8bbfcf-rtc4d" Sep 11 00:21:20.891019 kubelet[2738]: I0911 00:21:20.889678 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2n8g\" (UniqueName: \"kubernetes.io/projected/8e74a5fd-f569-4d8d-b0ea-4824cc4876e2-kube-api-access-c2n8g\") pod 
\"calico-kube-controllers-7bd84ccdd6-j58dq\" (UID: \"8e74a5fd-f569-4d8d-b0ea-4824cc4876e2\") " pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" Sep 11 00:21:20.891019 kubelet[2738]: I0911 00:21:20.889715 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztjhx\" (UniqueName: \"kubernetes.io/projected/12046b83-3ea2-4fdc-9dea-dc738a353e9a-kube-api-access-ztjhx\") pod \"calico-apiserver-6d5bf468dc-mhkf5\" (UID: \"12046b83-3ea2-4fdc-9dea-dc738a353e9a\") " pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" Sep 11 00:21:20.889947 systemd[1]: Created slice kubepods-burstable-pod1cafb586_b280_45c4_b4f2_b083ee293a60.slice - libcontainer container kubepods-burstable-pod1cafb586_b280_45c4_b4f2_b083ee293a60.slice. Sep 11 00:21:21.058961 kubelet[2738]: E0911 00:21:21.058895 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:21.063542 containerd[1552]: time="2025-09-11T00:21:21.063281543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-59lvw,Uid:412e1c59-365c-4eba-a906-0bb277b80086,Namespace:kube-system,Attempt:0,}" Sep 11 00:21:21.116811 containerd[1552]: time="2025-09-11T00:21:21.116333806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cd94ff8d-s47c9,Uid:1345181b-31c6-4f82-99c6-cdec28ab1c04,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:21.133933 containerd[1552]: time="2025-09-11T00:21:21.133854759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-pklpb,Uid:322511c7-2325-4fa4-b2e6-292071b9159a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:21:21.157451 containerd[1552]: time="2025-09-11T00:21:21.156590087Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-6km7x,Uid:36207bbd-453c-4e37-bd0f-a6d97176618f,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:21.171488 containerd[1552]: time="2025-09-11T00:21:21.171281843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd84ccdd6-j58dq,Uid:8e74a5fd-f569-4d8d-b0ea-4824cc4876e2,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:21.182544 containerd[1552]: time="2025-09-11T00:21:21.181853784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-mhkf5,Uid:12046b83-3ea2-4fdc-9dea-dc738a353e9a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:21:21.203462 kubelet[2738]: E0911 00:21:21.203403 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:21.219462 containerd[1552]: time="2025-09-11T00:21:21.219317710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtc4d,Uid:1cafb586-b280-45c4-b4f2-b083ee293a60,Namespace:kube-system,Attempt:0,}" Sep 11 00:21:21.509125 containerd[1552]: time="2025-09-11T00:21:21.508807191Z" level=error msg="Failed to destroy network for sandbox \"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.564570 containerd[1552]: time="2025-09-11T00:21:21.515105413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cd94ff8d-s47c9,Uid:1345181b-31c6-4f82-99c6-cdec28ab1c04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.564999 containerd[1552]: time="2025-09-11T00:21:21.519155977Z" level=error msg="Failed to destroy network for sandbox \"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.574044 kubelet[2738]: E0911 00:21:21.573872 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.574226 kubelet[2738]: E0911 00:21:21.574171 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85cd94ff8d-s47c9" Sep 11 00:21:21.574226 kubelet[2738]: E0911 00:21:21.574201 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85cd94ff8d-s47c9" Sep 11 00:21:21.575498 containerd[1552]: time="2025-09-11T00:21:21.521709927Z" 
level=error msg="Failed to destroy network for sandbox \"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.576704 kubelet[2738]: E0911 00:21:21.574284 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85cd94ff8d-s47c9_calico-system(1345181b-31c6-4f82-99c6-cdec28ab1c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85cd94ff8d-s47c9_calico-system(1345181b-31c6-4f82-99c6-cdec28ab1c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0387c45a93733c5bbbbce7fa4668f25b66db87189bc03a785a8f308d588d6df9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85cd94ff8d-s47c9" podUID="1345181b-31c6-4f82-99c6-cdec28ab1c04" Sep 11 00:21:21.579269 containerd[1552]: time="2025-09-11T00:21:21.579203450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-mhkf5,Uid:12046b83-3ea2-4fdc-9dea-dc738a353e9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.580748 containerd[1552]: time="2025-09-11T00:21:21.525037108Z" level=error msg="Failed to destroy network for sandbox \"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.582485 kubelet[2738]: E0911 00:21:21.581776 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.582485 kubelet[2738]: E0911 00:21:21.581862 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" Sep 11 00:21:21.582485 kubelet[2738]: E0911 00:21:21.581897 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" Sep 11 00:21:21.582700 kubelet[2738]: E0911 00:21:21.581949 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5bf468dc-mhkf5_calico-apiserver(12046b83-3ea2-4fdc-9dea-dc738a353e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5bf468dc-mhkf5_calico-apiserver(12046b83-3ea2-4fdc-9dea-dc738a353e9a)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"e7d9579b600a3d6f224e5d5a693b00c5b8b56a6f166ef578b97aa2222823c55f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" podUID="12046b83-3ea2-4fdc-9dea-dc738a353e9a" Sep 11 00:21:21.583657 containerd[1552]: time="2025-09-11T00:21:21.559041008Z" level=error msg="Failed to destroy network for sandbox \"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.584087 containerd[1552]: time="2025-09-11T00:21:21.584020126Z" level=error msg="Failed to destroy network for sandbox \"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.589957 containerd[1552]: time="2025-09-11T00:21:21.589868319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-pklpb,Uid:322511c7-2325-4fa4-b2e6-292071b9159a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.592211 containerd[1552]: time="2025-09-11T00:21:21.591858924Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-59lvw,Uid:412e1c59-365c-4eba-a906-0bb277b80086,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.594060 systemd[1]: Created slice kubepods-besteffort-pod7f5d0d1d_c7d3_4aae_bdf9_519143e1b44c.slice - libcontainer container kubepods-besteffort-pod7f5d0d1d_c7d3_4aae_bdf9_519143e1b44c.slice. Sep 11 00:21:21.594656 kubelet[2738]: E0911 00:21:21.594598 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.594787 kubelet[2738]: E0911 00:21:21.594689 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" Sep 11 00:21:21.594787 kubelet[2738]: E0911 00:21:21.594727 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" Sep 11 00:21:21.595323 containerd[1552]: time="2025-09-11T00:21:21.595211004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd84ccdd6-j58dq,Uid:8e74a5fd-f569-4d8d-b0ea-4824cc4876e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.596303 containerd[1552]: time="2025-09-11T00:21:21.596265688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6km7x,Uid:36207bbd-453c-4e37-bd0f-a6d97176618f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.598582 kubelet[2738]: E0911 00:21:21.597666 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.598582 kubelet[2738]: E0911 00:21:21.597748 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:21.598582 kubelet[2738]: E0911 00:21:21.597780 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6km7x" Sep 11 00:21:21.598582 kubelet[2738]: E0911 00:21:21.597838 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.598798 kubelet[2738]: E0911 00:21:21.597866 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-59lvw" Sep 11 00:21:21.598798 kubelet[2738]: E0911 00:21:21.597889 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-59lvw" Sep 11 00:21:21.598798 kubelet[2738]: E0911 00:21:21.597926 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.598798 kubelet[2738]: E0911 00:21:21.597951 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" Sep 11 00:21:21.598909 kubelet[2738]: E0911 00:21:21.597974 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" Sep 11 00:21:21.600788 kubelet[2738]: E0911 00:21:21.600710 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5bf468dc-pklpb_calico-apiserver(322511c7-2325-4fa4-b2e6-292071b9159a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6d5bf468dc-pklpb_calico-apiserver(322511c7-2325-4fa4-b2e6-292071b9159a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a709a9350e7fbb8862cfba7931e5f2b3ac0212e12d29b62c28b5c2a0456718b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" podUID="322511c7-2325-4fa4-b2e6-292071b9159a" Sep 11 00:21:21.600924 kubelet[2738]: E0911 00:21:21.600811 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6km7x_calico-system(36207bbd-453c-4e37-bd0f-a6d97176618f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6km7x_calico-system(36207bbd-453c-4e37-bd0f-a6d97176618f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c829c02e942ce0bcd2df8421697699c79123793fb117dc0f922ffb989a76ba26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6km7x" podUID="36207bbd-453c-4e37-bd0f-a6d97176618f" Sep 11 00:21:21.600924 kubelet[2738]: E0911 00:21:21.600859 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-59lvw_kube-system(412e1c59-365c-4eba-a906-0bb277b80086)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-59lvw_kube-system(412e1c59-365c-4eba-a906-0bb277b80086)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32fd1e775d0e1050c045acd3c61593613548e49475b17bfa0c3f695d16cfb825\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-59lvw" podUID="412e1c59-365c-4eba-a906-0bb277b80086" Sep 11 00:21:21.601070 kubelet[2738]: E0911 00:21:21.600901 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bd84ccdd6-j58dq_calico-system(8e74a5fd-f569-4d8d-b0ea-4824cc4876e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bd84ccdd6-j58dq_calico-system(8e74a5fd-f569-4d8d-b0ea-4824cc4876e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2203afbcd74ffd8d262942644d20d6e94d53d755f5fdee666d6bab2c74da7763\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" podUID="8e74a5fd-f569-4d8d-b0ea-4824cc4876e2" Sep 11 00:21:21.606694 containerd[1552]: time="2025-09-11T00:21:21.605990688Z" level=error msg="Failed to destroy network for sandbox \"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.610482 containerd[1552]: time="2025-09-11T00:21:21.608825527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqnt4,Uid:7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:21.614213 containerd[1552]: time="2025-09-11T00:21:21.613696722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtc4d,Uid:1cafb586-b280-45c4-b4f2-b083ee293a60,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.615477 kubelet[2738]: E0911 00:21:21.615316 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.615477 kubelet[2738]: E0911 00:21:21.615383 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rtc4d" Sep 11 00:21:21.615477 kubelet[2738]: E0911 00:21:21.615406 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rtc4d" Sep 11 00:21:21.615680 kubelet[2738]: E0911 00:21:21.615455 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rtc4d_kube-system(1cafb586-b280-45c4-b4f2-b083ee293a60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-rtc4d_kube-system(1cafb586-b280-45c4-b4f2-b083ee293a60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7a308056422a42d927da1b88f0389310a034a289f8bdbbf4ef2d7b9ec09097a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rtc4d" podUID="1cafb586-b280-45c4-b4f2-b083ee293a60" Sep 11 00:21:21.732360 containerd[1552]: time="2025-09-11T00:21:21.732296065Z" level=error msg="Failed to destroy network for sandbox \"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.734175 containerd[1552]: time="2025-09-11T00:21:21.734093386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqnt4,Uid:7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.734729 kubelet[2738]: E0911 00:21:21.734616 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:21:21.735180 kubelet[2738]: E0911 00:21:21.734735 2738 kuberuntime_sandbox.go:70] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:21.735180 kubelet[2738]: E0911 00:21:21.734800 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mqnt4" Sep 11 00:21:21.735479 kubelet[2738]: E0911 00:21:21.735427 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mqnt4_calico-system(7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mqnt4_calico-system(7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74eb1ff04e93614238e1aad7cbc14c37d307fcf9ce344251ffe6f78dc08a56fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mqnt4" podUID="7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c" Sep 11 00:21:28.223972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866899306.mount: Deactivated successfully. 
Sep 11 00:21:28.254941 containerd[1552]: time="2025-09-11T00:21:28.254843037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:28.256636 containerd[1552]: time="2025-09-11T00:21:28.256581889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 11 00:21:28.257812 containerd[1552]: time="2025-09-11T00:21:28.257030273Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:28.258904 containerd[1552]: time="2025-09-11T00:21:28.258862960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:28.259879 containerd[1552]: time="2025-09-11T00:21:28.259839855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.399588032s" Sep 11 00:21:28.260003 containerd[1552]: time="2025-09-11T00:21:28.259986641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 11 00:21:28.303847 containerd[1552]: time="2025-09-11T00:21:28.303768038Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:21:28.322567 containerd[1552]: time="2025-09-11T00:21:28.322483026Z" level=info msg="Container 
cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:28.327856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307022381.mount: Deactivated successfully. Sep 11 00:21:28.346248 containerd[1552]: time="2025-09-11T00:21:28.346160394Z" level=info msg="CreateContainer within sandbox \"0bc7cb578c053095de4eeb03cf74e8ec208716ad1354cdd7f7574909d8248be2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\"" Sep 11 00:21:28.347767 containerd[1552]: time="2025-09-11T00:21:28.347642722Z" level=info msg="StartContainer for \"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\"" Sep 11 00:21:28.351466 containerd[1552]: time="2025-09-11T00:21:28.351351929Z" level=info msg="connecting to shim cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e" address="unix:///run/containerd/s/a512dfe2411ae0fa1743e27d26f24b4acd037cbebc8d4a243ad6625010208f49" protocol=ttrpc version=3 Sep 11 00:21:28.508915 systemd[1]: Started cri-containerd-cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e.scope - libcontainer container cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e. Sep 11 00:21:28.591978 containerd[1552]: time="2025-09-11T00:21:28.591893011Z" level=info msg="StartContainer for \"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" returns successfully" Sep 11 00:21:28.869639 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:21:28.869825 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:21:28.949413 kubelet[2738]: I0911 00:21:28.949313 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6t8mr" podStartSLOduration=1.43447734 podStartE2EDuration="20.949268264s" podCreationTimestamp="2025-09-11 00:21:08 +0000 UTC" firstStartedPulling="2025-09-11 00:21:08.752444173 +0000 UTC m=+22.478494134" lastFinishedPulling="2025-09-11 00:21:28.267235082 +0000 UTC m=+41.993285058" observedRunningTime="2025-09-11 00:21:28.948813961 +0000 UTC m=+42.674863950" watchObservedRunningTime="2025-09-11 00:21:28.949268264 +0000 UTC m=+42.675318252" Sep 11 00:21:29.174771 kubelet[2738]: I0911 00:21:29.174514 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-ca-bundle\") pod \"1345181b-31c6-4f82-99c6-cdec28ab1c04\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " Sep 11 00:21:29.174771 kubelet[2738]: I0911 00:21:29.174604 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc4jj\" (UniqueName: \"kubernetes.io/projected/1345181b-31c6-4f82-99c6-cdec28ab1c04-kube-api-access-zc4jj\") pod \"1345181b-31c6-4f82-99c6-cdec28ab1c04\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " Sep 11 00:21:29.174771 kubelet[2738]: I0911 00:21:29.174635 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-backend-key-pair\") pod \"1345181b-31c6-4f82-99c6-cdec28ab1c04\" (UID: \"1345181b-31c6-4f82-99c6-cdec28ab1c04\") " Sep 11 00:21:29.177499 kubelet[2738]: I0911 00:21:29.176585 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"1345181b-31c6-4f82-99c6-cdec28ab1c04" (UID: "1345181b-31c6-4f82-99c6-cdec28ab1c04"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:21:29.182284 kubelet[2738]: I0911 00:21:29.182218 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1345181b-31c6-4f82-99c6-cdec28ab1c04" (UID: "1345181b-31c6-4f82-99c6-cdec28ab1c04"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:21:29.183416 kubelet[2738]: I0911 00:21:29.183372 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1345181b-31c6-4f82-99c6-cdec28ab1c04-kube-api-access-zc4jj" (OuterVolumeSpecName: "kube-api-access-zc4jj") pod "1345181b-31c6-4f82-99c6-cdec28ab1c04" (UID: "1345181b-31c6-4f82-99c6-cdec28ab1c04"). InnerVolumeSpecName "kube-api-access-zc4jj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:21:29.225418 systemd[1]: var-lib-kubelet-pods-1345181b\x2d31c6\x2d4f82\x2d99c6\x2dcdec28ab1c04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzc4jj.mount: Deactivated successfully. Sep 11 00:21:29.227164 systemd[1]: var-lib-kubelet-pods-1345181b\x2d31c6\x2d4f82\x2d99c6\x2dcdec28ab1c04-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 11 00:21:29.276230 kubelet[2738]: I0911 00:21:29.275901 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-ca-bundle\") on node \"ci-4372.1.0-n-d6d7f926f9\" DevicePath \"\"" Sep 11 00:21:29.276230 kubelet[2738]: I0911 00:21:29.275950 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zc4jj\" (UniqueName: \"kubernetes.io/projected/1345181b-31c6-4f82-99c6-cdec28ab1c04-kube-api-access-zc4jj\") on node \"ci-4372.1.0-n-d6d7f926f9\" DevicePath \"\"" Sep 11 00:21:29.276230 kubelet[2738]: I0911 00:21:29.275966 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1345181b-31c6-4f82-99c6-cdec28ab1c04-whisker-backend-key-pair\") on node \"ci-4372.1.0-n-d6d7f926f9\" DevicePath \"\"" Sep 11 00:21:29.401148 containerd[1552]: time="2025-09-11T00:21:29.399682602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" id:\"a38937b22a164c8ccbf4cb8a5c7ee033402c30a6d4937747e08c3a9460727a03\" pid:3808 exit_status:1 exited_at:{seconds:1757550089 nanos:367708811}" Sep 11 00:21:29.946485 systemd[1]: Removed slice kubepods-besteffort-pod1345181b_31c6_4f82_99c6_cdec28ab1c04.slice - libcontainer container kubepods-besteffort-pod1345181b_31c6_4f82_99c6_cdec28ab1c04.slice. Sep 11 00:21:30.071289 systemd[1]: Created slice kubepods-besteffort-pod6041e935_4ac6_4e56_a9ce_47e366232301.slice - libcontainer container kubepods-besteffort-pod6041e935_4ac6_4e56_a9ce_47e366232301.slice. 
Sep 11 00:21:30.085238 kubelet[2738]: I0911 00:21:30.085020 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrtz\" (UniqueName: \"kubernetes.io/projected/6041e935-4ac6-4e56-a9ce-47e366232301-kube-api-access-5xrtz\") pod \"whisker-f4d899447-h85lk\" (UID: \"6041e935-4ac6-4e56-a9ce-47e366232301\") " pod="calico-system/whisker-f4d899447-h85lk" Sep 11 00:21:30.086356 kubelet[2738]: I0911 00:21:30.085371 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6041e935-4ac6-4e56-a9ce-47e366232301-whisker-backend-key-pair\") pod \"whisker-f4d899447-h85lk\" (UID: \"6041e935-4ac6-4e56-a9ce-47e366232301\") " pod="calico-system/whisker-f4d899447-h85lk" Sep 11 00:21:30.087493 kubelet[2738]: I0911 00:21:30.086889 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6041e935-4ac6-4e56-a9ce-47e366232301-whisker-ca-bundle\") pod \"whisker-f4d899447-h85lk\" (UID: \"6041e935-4ac6-4e56-a9ce-47e366232301\") " pod="calico-system/whisker-f4d899447-h85lk" Sep 11 00:21:30.176914 containerd[1552]: time="2025-09-11T00:21:30.176824967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" id:\"afda4a6c8150cc9f744dd876835567b2c654d70b3379aa416a7934e064f29c10\" pid:3854 exit_status:1 exited_at:{seconds:1757550090 nanos:176335841}" Sep 11 00:21:30.379594 containerd[1552]: time="2025-09-11T00:21:30.379407014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4d899447-h85lk,Uid:6041e935-4ac6-4e56-a9ce-47e366232301,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:30.581496 kubelet[2738]: I0911 00:21:30.580358 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1345181b-31c6-4f82-99c6-cdec28ab1c04" path="/var/lib/kubelet/pods/1345181b-31c6-4f82-99c6-cdec28ab1c04/volumes" Sep 11 00:21:30.834187 systemd-networkd[1443]: calicb0aa698383: Link UP Sep 11 00:21:30.837395 systemd-networkd[1443]: calicb0aa698383: Gained carrier Sep 11 00:21:30.882077 containerd[1552]: 2025-09-11 00:21:30.449 [INFO][3871] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:21:30.882077 containerd[1552]: 2025-09-11 00:21:30.483 [INFO][3871] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0 whisker-f4d899447- calico-system 6041e935-4ac6-4e56-a9ce-47e366232301 966 0 2025-09-11 00:21:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f4d899447 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 whisker-f4d899447-h85lk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicb0aa698383 [] [] }} ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-" Sep 11 00:21:30.882077 containerd[1552]: 2025-09-11 00:21:30.483 [INFO][3871] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.882077 containerd[1552]: 2025-09-11 00:21:30.707 [INFO][3879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" 
HandleID="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.711 [INFO][3879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" HandleID="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"whisker-f4d899447-h85lk", "timestamp":"2025-09-11 00:21:30.707131928 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.711 [INFO][3879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.712 [INFO][3879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.713 [INFO][3879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.737 [INFO][3879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.755 [INFO][3879] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.765 [INFO][3879] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.769 [INFO][3879] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.882658 containerd[1552]: 2025-09-11 00:21:30.773 [INFO][3879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.774 [INFO][3879] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.777 [INFO][3879] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.785 [INFO][3879] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.795 [INFO][3879] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.13.1/26] block=192.168.13.0/26 handle="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.795 [INFO][3879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.1/26] handle="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.796 [INFO][3879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:21:30.883222 containerd[1552]: 2025-09-11 00:21:30.796 [INFO][3879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.1/26] IPv6=[] ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" HandleID="k8s-pod-network.10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.883411 containerd[1552]: 2025-09-11 00:21:30.803 [INFO][3871] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0", GenerateName:"whisker-f4d899447-", Namespace:"calico-system", SelfLink:"", UID:"6041e935-4ac6-4e56-a9ce-47e366232301", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f4d899447", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"whisker-f4d899447-h85lk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicb0aa698383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:30.883411 containerd[1552]: 2025-09-11 00:21:30.803 [INFO][3871] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.1/32] ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.883504 containerd[1552]: 2025-09-11 00:21:30.803 [INFO][3871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb0aa698383 ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.883504 containerd[1552]: 2025-09-11 00:21:30.838 [INFO][3871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.883572 containerd[1552]: 2025-09-11 00:21:30.840 [INFO][3871] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0", GenerateName:"whisker-f4d899447-", Namespace:"calico-system", SelfLink:"", UID:"6041e935-4ac6-4e56-a9ce-47e366232301", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f4d899447", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca", Pod:"whisker-f4d899447-h85lk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.13.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicb0aa698383", MAC:"c6:87:fb:42:e1:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:30.883637 containerd[1552]: 2025-09-11 00:21:30.868 [INFO][3871] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" Namespace="calico-system" Pod="whisker-f4d899447-h85lk" 
WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-whisker--f4d899447--h85lk-eth0" Sep 11 00:21:30.994817 containerd[1552]: time="2025-09-11T00:21:30.992764458Z" level=info msg="connecting to shim 10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca" address="unix:///run/containerd/s/2ff49b5e1c9401edc9ef369c9e5b259697afd472be1e42dd05f7d87fe680ab49" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:31.096907 systemd[1]: Started cri-containerd-10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca.scope - libcontainer container 10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca. Sep 11 00:21:31.239861 containerd[1552]: time="2025-09-11T00:21:31.239637109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f4d899447-h85lk,Uid:6041e935-4ac6-4e56-a9ce-47e366232301,Namespace:calico-system,Attempt:0,} returns sandbox id \"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca\"" Sep 11 00:21:31.246690 containerd[1552]: time="2025-09-11T00:21:31.246220104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:21:31.690351 containerd[1552]: time="2025-09-11T00:21:31.690300334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" id:\"73ed251a226cd926027e6a20d6f935b81e9839b574f16063e11634ffbc04e86c\" pid:4021 exit_status:1 exited_at:{seconds:1757550091 nanos:689140715}" Sep 11 00:21:31.959159 systemd-networkd[1443]: calicb0aa698383: Gained IPv6LL Sep 11 00:21:32.194835 systemd-networkd[1443]: vxlan.calico: Link UP Sep 11 00:21:32.194847 systemd-networkd[1443]: vxlan.calico: Gained carrier Sep 11 00:21:32.567307 containerd[1552]: time="2025-09-11T00:21:32.567234001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6km7x,Uid:36207bbd-453c-4e37-bd0f-a6d97176618f,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:32.836952 systemd-networkd[1443]: cali3dbe481c36d: Link UP Sep 
11 00:21:32.840840 systemd-networkd[1443]: cali3dbe481c36d: Gained carrier Sep 11 00:21:32.893672 containerd[1552]: 2025-09-11 00:21:32.627 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0 goldmane-54d579b49d- calico-system 36207bbd-453c-4e37-bd0f-a6d97176618f 894 0 2025-09-11 00:21:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 goldmane-54d579b49d-6km7x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3dbe481c36d [] [] }} ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-" Sep 11 00:21:32.893672 containerd[1552]: 2025-09-11 00:21:32.628 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.893672 containerd[1552]: 2025-09-11 00:21:32.725 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" HandleID="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.726 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" 
HandleID="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"goldmane-54d579b49d-6km7x", "timestamp":"2025-09-11 00:21:32.725384008 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.726 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.726 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.727 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.753 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.766 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.774 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.778 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.894265 containerd[1552]: 2025-09-11 00:21:32.782 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.782 [INFO][4130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.785 [INFO][4130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887 Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.792 [INFO][4130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.803 [INFO][4130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.2/26] block=192.168.13.0/26 handle="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.803 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.2/26] handle="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.804 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:21:32.895187 containerd[1552]: 2025-09-11 00:21:32.804 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.2/26] IPv6=[] ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" HandleID="k8s-pod-network.5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.895652 containerd[1552]: 2025-09-11 00:21:32.819 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"36207bbd-453c-4e37-bd0f-a6d97176618f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"goldmane-54d579b49d-6km7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali3dbe481c36d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:32.895652 containerd[1552]: 2025-09-11 00:21:32.820 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.2/32] ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.895794 containerd[1552]: 2025-09-11 00:21:32.820 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3dbe481c36d ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.895794 containerd[1552]: 2025-09-11 00:21:32.847 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:32.895872 containerd[1552]: 2025-09-11 00:21:32.854 [INFO][4118] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", 
UID:"36207bbd-453c-4e37-bd0f-a6d97176618f", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887", Pod:"goldmane-54d579b49d-6km7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.13.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3dbe481c36d", MAC:"fe:18:a1:bc:45:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:32.896254 containerd[1552]: 2025-09-11 00:21:32.878 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" Namespace="calico-system" Pod="goldmane-54d579b49d-6km7x" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-goldmane--54d579b49d--6km7x-eth0" Sep 11 00:21:33.032745 containerd[1552]: time="2025-09-11T00:21:33.030874763Z" level=info msg="connecting to shim 5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887" address="unix:///run/containerd/s/608c9f240eaa0d5f57fb23a65208c9abaa7aae26882e3f259eb7eeafb0386694" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:33.132945 systemd[1]: Started 
cri-containerd-5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887.scope - libcontainer container 5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887. Sep 11 00:21:33.229345 containerd[1552]: time="2025-09-11T00:21:33.229213639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:33.231153 containerd[1552]: time="2025-09-11T00:21:33.231054383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 11 00:21:33.232576 containerd[1552]: time="2025-09-11T00:21:33.232482768Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:33.237983 containerd[1552]: time="2025-09-11T00:21:33.237884783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:33.239444 systemd-networkd[1443]: vxlan.calico: Gained IPv6LL Sep 11 00:21:33.248233 containerd[1552]: time="2025-09-11T00:21:33.248061820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.001788773s" Sep 11 00:21:33.248233 containerd[1552]: time="2025-09-11T00:21:33.248117055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 11 00:21:33.262403 containerd[1552]: time="2025-09-11T00:21:33.262271842Z" level=info 
msg="CreateContainer within sandbox \"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:21:33.279907 containerd[1552]: time="2025-09-11T00:21:33.279839928Z" level=info msg="Container 9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:33.287369 containerd[1552]: time="2025-09-11T00:21:33.287285826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6km7x,Uid:36207bbd-453c-4e37-bd0f-a6d97176618f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887\"" Sep 11 00:21:33.292501 containerd[1552]: time="2025-09-11T00:21:33.291943782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:21:33.305921 containerd[1552]: time="2025-09-11T00:21:33.305877144Z" level=info msg="CreateContainer within sandbox \"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794\"" Sep 11 00:21:33.312370 containerd[1552]: time="2025-09-11T00:21:33.311942314Z" level=info msg="StartContainer for \"9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794\"" Sep 11 00:21:33.315957 containerd[1552]: time="2025-09-11T00:21:33.315901383Z" level=info msg="connecting to shim 9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794" address="unix:///run/containerd/s/2ff49b5e1c9401edc9ef369c9e5b259697afd472be1e42dd05f7d87fe680ab49" protocol=ttrpc version=3 Sep 11 00:21:33.342828 systemd[1]: Started cri-containerd-9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794.scope - libcontainer container 9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794. 
Sep 11 00:21:33.433964 containerd[1552]: time="2025-09-11T00:21:33.433664319Z" level=info msg="StartContainer for \"9aae2c14453f67866b34c2364eb0065d99bba03911b0a87b0bb3ad28a855b794\" returns successfully" Sep 11 00:21:33.567110 containerd[1552]: time="2025-09-11T00:21:33.566782612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd84ccdd6-j58dq,Uid:8e74a5fd-f569-4d8d-b0ea-4824cc4876e2,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:33.728140 systemd-networkd[1443]: cali7227602b054: Link UP Sep 11 00:21:33.729714 systemd-networkd[1443]: cali7227602b054: Gained carrier Sep 11 00:21:33.753917 containerd[1552]: 2025-09-11 00:21:33.619 [INFO][4267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0 calico-kube-controllers-7bd84ccdd6- calico-system 8e74a5fd-f569-4d8d-b0ea-4824cc4876e2 897 0 2025-09-11 00:21:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bd84ccdd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 calico-kube-controllers-7bd84ccdd6-j58dq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7227602b054 [] [] }} ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-" Sep 11 00:21:33.753917 containerd[1552]: 2025-09-11 00:21:33.620 [INFO][4267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" 
WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.753917 containerd[1552]: 2025-09-11 00:21:33.669 [INFO][4279] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" HandleID="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.669 [INFO][4279] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" HandleID="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"calico-kube-controllers-7bd84ccdd6-j58dq", "timestamp":"2025-09-11 00:21:33.669518358 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.669 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.669 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.669 [INFO][4279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.679 [INFO][4279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.686 [INFO][4279] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.695 [INFO][4279] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.698 [INFO][4279] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755147 containerd[1552]: 2025-09-11 00:21:33.702 [INFO][4279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.702 [INFO][4279] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.706 [INFO][4279] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40 Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.711 [INFO][4279] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.719 [INFO][4279] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.13.3/26] block=192.168.13.0/26 handle="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.719 [INFO][4279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.3/26] handle="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.719 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:21:33.755566 containerd[1552]: 2025-09-11 00:21:33.719 [INFO][4279] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.3/26] IPv6=[] ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" HandleID="k8s-pod-network.49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.755817 containerd[1552]: 2025-09-11 00:21:33.722 [INFO][4267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0", GenerateName:"calico-kube-controllers-7bd84ccdd6-", Namespace:"calico-system", SelfLink:"", UID:"8e74a5fd-f569-4d8d-b0ea-4824cc4876e2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd84ccdd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"calico-kube-controllers-7bd84ccdd6-j58dq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7227602b054", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:33.755917 containerd[1552]: 2025-09-11 00:21:33.722 [INFO][4267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.3/32] ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.755917 containerd[1552]: 2025-09-11 00:21:33.723 [INFO][4267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7227602b054 ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.755917 containerd[1552]: 2025-09-11 00:21:33.730 [INFO][4267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.756022 containerd[1552]: 2025-09-11 00:21:33.731 [INFO][4267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0", GenerateName:"calico-kube-controllers-7bd84ccdd6-", Namespace:"calico-system", SelfLink:"", UID:"8e74a5fd-f569-4d8d-b0ea-4824cc4876e2", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bd84ccdd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40", Pod:"calico-kube-controllers-7bd84ccdd6-j58dq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.13.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7227602b054", MAC:"86:ca:94:7f:c0:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:33.756120 containerd[1552]: 2025-09-11 00:21:33.748 [INFO][4267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" Namespace="calico-system" Pod="calico-kube-controllers-7bd84ccdd6-j58dq" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--kube--controllers--7bd84ccdd6--j58dq-eth0" Sep 11 00:21:33.802101 containerd[1552]: time="2025-09-11T00:21:33.802022305Z" level=info msg="connecting to shim 49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40" address="unix:///run/containerd/s/ef966534f5cf87793c68b29742d98b9ab48041f372764400e24b4dcb6c6c5332" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:33.850870 systemd[1]: Started cri-containerd-49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40.scope - libcontainer container 49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40. 
Sep 11 00:21:33.941835 containerd[1552]: time="2025-09-11T00:21:33.941713627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bd84ccdd6-j58dq,Uid:8e74a5fd-f569-4d8d-b0ea-4824cc4876e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40\"" Sep 11 00:21:34.070795 systemd-networkd[1443]: cali3dbe481c36d: Gained IPv6LL Sep 11 00:21:34.566729 kubelet[2738]: E0911 00:21:34.565882 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:34.568912 containerd[1552]: time="2025-09-11T00:21:34.568323158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-59lvw,Uid:412e1c59-365c-4eba-a906-0bb277b80086,Namespace:kube-system,Attempt:0,}" Sep 11 00:21:34.569772 containerd[1552]: time="2025-09-11T00:21:34.569399448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-mhkf5,Uid:12046b83-3ea2-4fdc-9dea-dc738a353e9a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:21:34.571320 containerd[1552]: time="2025-09-11T00:21:34.571221773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-pklpb,Uid:322511c7-2325-4fa4-b2e6-292071b9159a,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:21:34.967903 systemd-networkd[1443]: cali814596fa7e8: Link UP Sep 11 00:21:34.971500 systemd-networkd[1443]: cali814596fa7e8: Gained carrier Sep 11 00:21:35.026860 containerd[1552]: 2025-09-11 00:21:34.677 [INFO][4346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0 calico-apiserver-6d5bf468dc- calico-apiserver 322511c7-2325-4fa4-b2e6-292071b9159a 893 0 2025-09-11 00:21:04 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5bf468dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 calico-apiserver-6d5bf468dc-pklpb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali814596fa7e8 [] [] }} ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-" Sep 11 00:21:35.026860 containerd[1552]: 2025-09-11 00:21:34.679 [INFO][4346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.026860 containerd[1552]: 2025-09-11 00:21:34.837 [INFO][4373] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" HandleID="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.838 [INFO][4373] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" HandleID="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-d6d7f926f9", 
"pod":"calico-apiserver-6d5bf468dc-pklpb", "timestamp":"2025-09-11 00:21:34.837758433 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.839 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.839 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.839 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.865 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.877 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.891 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.901 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.028632 containerd[1552]: 2025-09-11 00:21:34.912 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.912 [INFO][4373] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" 
host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.920 [INFO][4373] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.932 [INFO][4373] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.952 [INFO][4373] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.4/26] block=192.168.13.0/26 handle="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.953 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.4/26] handle="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.953 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:21:35.029073 containerd[1552]: 2025-09-11 00:21:34.953 [INFO][4373] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.4/26] IPv6=[] ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" HandleID="k8s-pod-network.4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.029305 containerd[1552]: 2025-09-11 00:21:34.957 [INFO][4346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0", GenerateName:"calico-apiserver-6d5bf468dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"322511c7-2325-4fa4-b2e6-292071b9159a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bf468dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"calico-apiserver-6d5bf468dc-pklpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali814596fa7e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.029420 containerd[1552]: 2025-09-11 00:21:34.958 [INFO][4346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.4/32] ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.029420 containerd[1552]: 2025-09-11 00:21:34.958 [INFO][4346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali814596fa7e8 ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.029420 containerd[1552]: 2025-09-11 00:21:34.973 [INFO][4346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.030313 containerd[1552]: 2025-09-11 00:21:34.974 [INFO][4346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0", GenerateName:"calico-apiserver-6d5bf468dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"322511c7-2325-4fa4-b2e6-292071b9159a", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bf468dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d", Pod:"calico-apiserver-6d5bf468dc-pklpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali814596fa7e8", MAC:"26:24:5c:4b:7e:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.031154 containerd[1552]: 2025-09-11 00:21:35.011 [INFO][4346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-pklpb" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--pklpb-eth0" Sep 11 00:21:35.207428 containerd[1552]: time="2025-09-11T00:21:35.203257507Z" level=info 
msg="connecting to shim 4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d" address="unix:///run/containerd/s/5b4243b489821df15094301b6c190dc34abb0b855a39e0b44070b3feb16c0ec9" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:35.228245 systemd-networkd[1443]: cali147d3afee85: Link UP Sep 11 00:21:35.238054 systemd-networkd[1443]: cali147d3afee85: Gained carrier Sep 11 00:21:35.288116 systemd-networkd[1443]: cali7227602b054: Gained IPv6LL Sep 11 00:21:35.294798 systemd[1]: Started cri-containerd-4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d.scope - libcontainer container 4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d. Sep 11 00:21:35.325756 containerd[1552]: 2025-09-11 00:21:34.743 [INFO][4358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0 coredns-674b8bbfcf- kube-system 412e1c59-365c-4eba-a906-0bb277b80086 889 0 2025-09-11 00:20:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 coredns-674b8bbfcf-59lvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali147d3afee85 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-" Sep 11 00:21:35.325756 containerd[1552]: 2025-09-11 00:21:34.743 [INFO][4358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 
00:21:35.325756 containerd[1552]: 2025-09-11 00:21:34.840 [INFO][4382] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" HandleID="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:34.841 [INFO][4382] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" HandleID="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"coredns-674b8bbfcf-59lvw", "timestamp":"2025-09-11 00:21:34.838727214 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:34.842 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:34.954 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:34.954 [INFO][4382] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:34.994 [INFO][4382] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:35.025 [INFO][4382] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:35.045 [INFO][4382] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:35.057 [INFO][4382] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326332 containerd[1552]: 2025-09-11 00:21:35.070 [INFO][4382] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.072 [INFO][4382] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.083 [INFO][4382] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135 Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.111 [INFO][4382] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.141 [INFO][4382] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.13.5/26] block=192.168.13.0/26 handle="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.141 [INFO][4382] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.5/26] handle="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.141 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:21:35.326607 containerd[1552]: 2025-09-11 00:21:35.141 [INFO][4382] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.5/26] IPv6=[] ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" HandleID="k8s-pod-network.5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.162 [INFO][4358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"412e1c59-365c-4eba-a906-0bb277b80086", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"coredns-674b8bbfcf-59lvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali147d3afee85", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.162 [INFO][4358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.5/32] ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.162 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali147d3afee85 ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.243 [INFO][4358] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.248 [INFO][4358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"412e1c59-365c-4eba-a906-0bb277b80086", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135", Pod:"coredns-674b8bbfcf-59lvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali147d3afee85", 
MAC:"0a:8a:11:71:06:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.326794 containerd[1552]: 2025-09-11 00:21:35.308 [INFO][4358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" Namespace="kube-system" Pod="coredns-674b8bbfcf-59lvw" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--59lvw-eth0" Sep 11 00:21:35.377977 systemd-networkd[1443]: cali49f9a552061: Link UP Sep 11 00:21:35.381098 systemd-networkd[1443]: cali49f9a552061: Gained carrier Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:34.740 [INFO][4337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0 calico-apiserver-6d5bf468dc- calico-apiserver 12046b83-3ea2-4fdc-9dea-dc738a353e9a 895 0 2025-09-11 00:21:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5bf468dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 calico-apiserver-6d5bf468dc-mhkf5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali49f9a552061 [] [] }} ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:34.742 [INFO][4337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:34.888 [INFO][4380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" HandleID="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:34.895 [INFO][4380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" HandleID="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"calico-apiserver-6d5bf468dc-mhkf5", "timestamp":"2025-09-11 00:21:34.888294895 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:34.895 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.141 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.142 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.188 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.208 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.246 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.253 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.262 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.262 [INFO][4380] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.266 [INFO][4380] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.301 [INFO][4380] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" 
host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.343 [INFO][4380] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.6/26] block=192.168.13.0/26 handle="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.344 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.6/26] handle="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.344 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:21:35.478166 containerd[1552]: 2025-09-11 00:21:35.345 [INFO][4380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.6/26] IPv6=[] ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" HandleID="k8s-pod-network.698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.360 [INFO][4337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0", GenerateName:"calico-apiserver-6d5bf468dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"12046b83-3ea2-4fdc-9dea-dc738a353e9a", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 4, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bf468dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"calico-apiserver-6d5bf468dc-mhkf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49f9a552061", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.361 [INFO][4337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.6/32] ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.361 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49f9a552061 ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.383 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.384 [INFO][4337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0", GenerateName:"calico-apiserver-6d5bf468dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"12046b83-3ea2-4fdc-9dea-dc738a353e9a", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5bf468dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db", Pod:"calico-apiserver-6d5bf468dc-mhkf5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.13.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali49f9a552061", MAC:"5e:fb:38:61:ef:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:35.482000 containerd[1552]: 2025-09-11 00:21:35.422 [INFO][4337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" Namespace="calico-apiserver" Pod="calico-apiserver-6d5bf468dc-mhkf5" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-calico--apiserver--6d5bf468dc--mhkf5-eth0" Sep 11 00:21:35.539715 containerd[1552]: time="2025-09-11T00:21:35.539472335Z" level=info msg="connecting to shim 5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135" address="unix:///run/containerd/s/386223564a89d336af15adbc5ac18f89acfe883d4a50418b277c2b283e55ead1" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:35.565249 kubelet[2738]: E0911 00:21:35.565199 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:35.581477 containerd[1552]: time="2025-09-11T00:21:35.579499285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtc4d,Uid:1cafb586-b280-45c4-b4f2-b083ee293a60,Namespace:kube-system,Attempt:0,}" Sep 11 00:21:35.606625 containerd[1552]: time="2025-09-11T00:21:35.604804543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqnt4,Uid:7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c,Namespace:calico-system,Attempt:0,}" Sep 11 00:21:35.606625 containerd[1552]: time="2025-09-11T00:21:35.605121467Z" level=info msg="connecting to shim 698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db" address="unix:///run/containerd/s/89023da76aca30091b8a6b031ddc96b7c26a13549cbb1583b953ebf1d009135e" namespace=k8s.io protocol=ttrpc 
version=3 Sep 11 00:21:35.795022 systemd[1]: Started cri-containerd-5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135.scope - libcontainer container 5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135. Sep 11 00:21:35.821917 systemd[1]: Started cri-containerd-698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db.scope - libcontainer container 698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db. Sep 11 00:21:35.904713 containerd[1552]: time="2025-09-11T00:21:35.904453660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-pklpb,Uid:322511c7-2325-4fa4-b2e6-292071b9159a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d\"" Sep 11 00:21:36.045160 containerd[1552]: time="2025-09-11T00:21:36.045089547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-59lvw,Uid:412e1c59-365c-4eba-a906-0bb277b80086,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135\"" Sep 11 00:21:36.050219 kubelet[2738]: E0911 00:21:36.049918 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:36.076430 containerd[1552]: time="2025-09-11T00:21:36.076183199Z" level=info msg="CreateContainer within sandbox \"5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:21:36.284300 containerd[1552]: time="2025-09-11T00:21:36.284238568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5bf468dc-mhkf5,Uid:12046b83-3ea2-4fdc-9dea-dc738a353e9a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db\"" Sep 11 00:21:36.326482 
containerd[1552]: time="2025-09-11T00:21:36.326331480Z" level=info msg="Container beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:36.363405 systemd-networkd[1443]: cali8f19db14919: Link UP Sep 11 00:21:36.364747 systemd-networkd[1443]: cali8f19db14919: Gained carrier Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:35.969 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0 coredns-674b8bbfcf- kube-system 1cafb586-b280-45c4-b4f2-b083ee293a60 899 0 2025-09-11 00:20:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 coredns-674b8bbfcf-rtc4d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f19db14919 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:35.971 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.225 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" HandleID="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" 
Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.226 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" HandleID="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3740), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"coredns-674b8bbfcf-rtc4d", "timestamp":"2025-09-11 00:21:36.225824643 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.226 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.226 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.226 [INFO][4581] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.243 [INFO][4581] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.258 [INFO][4581] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.277 [INFO][4581] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.288 [INFO][4581] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.296 [INFO][4581] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.296 [INFO][4581] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.304 [INFO][4581] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935 Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.320 [INFO][4581] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.346 [INFO][4581] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.13.7/26] block=192.168.13.0/26 handle="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.347 [INFO][4581] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.7/26] handle="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.347 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:21:36.432250 containerd[1552]: 2025-09-11 00:21:36.347 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.7/26] IPv6=[] ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" HandleID="k8s-pod-network.dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.350 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1cafb586-b280-45c4-b4f2-b083ee293a60", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"coredns-674b8bbfcf-rtc4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f19db14919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.350 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.7/32] ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.350 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f19db14919 ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.366 [INFO][4497] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.376 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1cafb586-b280-45c4-b4f2-b083ee293a60", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935", Pod:"coredns-674b8bbfcf-rtc4d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.13.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f19db14919", 
MAC:"b6:b8:69:57:50:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:36.434204 containerd[1552]: 2025-09-11 00:21:36.413 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtc4d" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-coredns--674b8bbfcf--rtc4d-eth0" Sep 11 00:21:36.529024 containerd[1552]: time="2025-09-11T00:21:36.528258334Z" level=info msg="connecting to shim dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935" address="unix:///run/containerd/s/2de5e9bfd360b92f7c18871bc8a623dc10334f38326441c4069669af76fc0d26" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:36.535864 containerd[1552]: time="2025-09-11T00:21:36.535804030Z" level=info msg="CreateContainer within sandbox \"5ef02ac75a2d8bfdfe6fb35fbcd64cfead44317f549805bc0d1344f0b97e8135\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d\"" Sep 11 00:21:36.538137 containerd[1552]: time="2025-09-11T00:21:36.538019813Z" level=info msg="StartContainer for \"beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d\"" Sep 11 00:21:36.561275 systemd-networkd[1443]: cali5285ceae801: Link UP Sep 11 00:21:36.563090 systemd-networkd[1443]: cali5285ceae801: Gained carrier Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.059 [INFO][4515] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0 csi-node-driver- calico-system 7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c 722 0 2025-09-11 00:21:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-n-d6d7f926f9 csi-node-driver-mqnt4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5285ceae801 [] [] }} ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.059 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.294 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" HandleID="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.298 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" HandleID="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" 
Workload="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000321390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-n-d6d7f926f9", "pod":"csi-node-driver-mqnt4", "timestamp":"2025-09-11 00:21:36.294178408 +0000 UTC"}, Hostname:"ci-4372.1.0-n-d6d7f926f9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.301 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.348 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.350 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-n-d6d7f926f9' Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.399 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.431 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.461 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.475 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.485 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.13.0/26 host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 
containerd[1552]: 2025-09-11 00:21:36.485 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.13.0/26 handle="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.493 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.509 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.13.0/26 handle="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.540 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.13.8/26] block=192.168.13.0/26 handle="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.541 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.13.8/26] handle="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" host="ci-4372.1.0-n-d6d7f926f9" Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.541 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:21:36.623608 containerd[1552]: 2025-09-11 00:21:36.542 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.13.8/26] IPv6=[] ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" HandleID="k8s-pod-network.7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Workload="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.553 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"", Pod:"csi-node-driver-mqnt4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5285ceae801", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.553 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.13.8/32] ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.553 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5285ceae801 ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.564 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.564 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-n-d6d7f926f9", ContainerID:"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e", Pod:"csi-node-driver-mqnt4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.13.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5285ceae801", MAC:"06:d4:09:27:32:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:21:36.628734 containerd[1552]: 2025-09-11 00:21:36.609 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" Namespace="calico-system" Pod="csi-node-driver-mqnt4" WorkloadEndpoint="ci--4372.1.0--n--d6d7f926f9-k8s-csi--node--driver--mqnt4-eth0" Sep 11 00:21:36.697454 systemd-networkd[1443]: cali49f9a552061: Gained IPv6LL Sep 11 00:21:36.737497 containerd[1552]: time="2025-09-11T00:21:36.737318745Z" level=info msg="connecting to shim beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d" 
address="unix:///run/containerd/s/386223564a89d336af15adbc5ac18f89acfe883d4a50418b277c2b283e55ead1" protocol=ttrpc version=3 Sep 11 00:21:36.760006 systemd-networkd[1443]: cali814596fa7e8: Gained IPv6LL Sep 11 00:21:36.760470 systemd-networkd[1443]: cali147d3afee85: Gained IPv6LL Sep 11 00:21:36.809409 containerd[1552]: time="2025-09-11T00:21:36.809048072Z" level=info msg="connecting to shim 7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e" address="unix:///run/containerd/s/db253ef2464fa701f718439149e092c2f40fb7da2093737c717f30a5924c81fe" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:21:36.809327 systemd[1]: Started cri-containerd-dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935.scope - libcontainer container dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935. Sep 11 00:21:36.876968 systemd[1]: Started cri-containerd-beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d.scope - libcontainer container beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d. Sep 11 00:21:36.928607 systemd[1]: Started cri-containerd-7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e.scope - libcontainer container 7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e. 
Sep 11 00:21:37.221631 containerd[1552]: time="2025-09-11T00:21:37.220284040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtc4d,Uid:1cafb586-b280-45c4-b4f2-b083ee293a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935\"" Sep 11 00:21:37.226003 containerd[1552]: time="2025-09-11T00:21:37.225941390Z" level=info msg="StartContainer for \"beb1cd635f12d847f98e99df69499781bea38fa1f205b2a752b4300cba14cd4d\" returns successfully" Sep 11 00:21:37.228872 kubelet[2738]: E0911 00:21:37.228813 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:37.245271 containerd[1552]: time="2025-09-11T00:21:37.245155940Z" level=info msg="CreateContainer within sandbox \"dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:21:37.275023 containerd[1552]: time="2025-09-11T00:21:37.274874084Z" level=info msg="Container 37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:37.286118 containerd[1552]: time="2025-09-11T00:21:37.286049981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mqnt4,Uid:7f5d0d1d-c7d3-4aae-bdf9-519143e1b44c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e\"" Sep 11 00:21:37.297891 containerd[1552]: time="2025-09-11T00:21:37.297786040Z" level=info msg="CreateContainer within sandbox \"dbb7a89784f19bcc9338403f5ae71b15e380e259c6098c6c7e4235143e40e935\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e\"" Sep 11 00:21:37.300102 containerd[1552]: time="2025-09-11T00:21:37.299755272Z" level=info 
msg="StartContainer for \"37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e\"" Sep 11 00:21:37.308178 containerd[1552]: time="2025-09-11T00:21:37.307763650Z" level=info msg="connecting to shim 37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e" address="unix:///run/containerd/s/2de5e9bfd360b92f7c18871bc8a623dc10334f38326441c4069669af76fc0d26" protocol=ttrpc version=3 Sep 11 00:21:37.414445 systemd[1]: Started cri-containerd-37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e.scope - libcontainer container 37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e. Sep 11 00:21:37.596267 containerd[1552]: time="2025-09-11T00:21:37.596206100Z" level=info msg="StartContainer for \"37712382ea5362951618851c44690bd16eba161786530c3aeba71b8dbd2cbb0e\" returns successfully" Sep 11 00:21:37.719086 systemd-networkd[1443]: cali8f19db14919: Gained IPv6LL Sep 11 00:21:38.103919 systemd-networkd[1443]: cali5285ceae801: Gained IPv6LL Sep 11 00:21:38.250457 kubelet[2738]: E0911 00:21:38.250415 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:38.258609 kubelet[2738]: E0911 00:21:38.258127 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:38.297435 kubelet[2738]: I0911 00:21:38.294752 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-59lvw" podStartSLOduration=47.2947283 podStartE2EDuration="47.2947283s" podCreationTimestamp="2025-09-11 00:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:21:38.294344898 +0000 UTC m=+52.020394883" watchObservedRunningTime="2025-09-11 
00:21:38.2947283 +0000 UTC m=+52.020778284" Sep 11 00:21:38.334327 kubelet[2738]: I0911 00:21:38.334238 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rtc4d" podStartSLOduration=47.334210539 podStartE2EDuration="47.334210539s" podCreationTimestamp="2025-09-11 00:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:21:38.333068258 +0000 UTC m=+52.059118246" watchObservedRunningTime="2025-09-11 00:21:38.334210539 +0000 UTC m=+52.060260537" Sep 11 00:21:38.738552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092966016.mount: Deactivated successfully. Sep 11 00:21:39.267270 kubelet[2738]: E0911 00:21:39.267228 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:39.269905 kubelet[2738]: E0911 00:21:39.267348 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:39.527284 containerd[1552]: time="2025-09-11T00:21:39.526302722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:39.528651 containerd[1552]: time="2025-09-11T00:21:39.528587940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 11 00:21:39.529776 containerd[1552]: time="2025-09-11T00:21:39.529733371Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:39.577164 containerd[1552]: time="2025-09-11T00:21:39.536589424Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:39.577164 containerd[1552]: time="2025-09-11T00:21:39.540708649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.248702656s" Sep 11 00:21:39.577164 containerd[1552]: time="2025-09-11T00:21:39.540765409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 11 00:21:39.579063 containerd[1552]: time="2025-09-11T00:21:39.578935399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:21:39.586512 containerd[1552]: time="2025-09-11T00:21:39.585987470Z" level=info msg="CreateContainer within sandbox \"5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:21:39.604786 containerd[1552]: time="2025-09-11T00:21:39.604716877Z" level=info msg="Container 15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:39.617327 containerd[1552]: time="2025-09-11T00:21:39.617262177Z" level=info msg="CreateContainer within sandbox \"5fb4e1989fb823aaff8afb907b806717e9d32b230cad70a451b9fd12ffd49887\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\"" Sep 11 00:21:39.618235 containerd[1552]: time="2025-09-11T00:21:39.618138254Z" level=info msg="StartContainer for 
\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\"" Sep 11 00:21:39.620988 containerd[1552]: time="2025-09-11T00:21:39.620826629Z" level=info msg="connecting to shim 15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3" address="unix:///run/containerd/s/608c9f240eaa0d5f57fb23a65208c9abaa7aae26882e3f259eb7eeafb0386694" protocol=ttrpc version=3 Sep 11 00:21:39.666890 systemd[1]: Started cri-containerd-15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3.scope - libcontainer container 15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3. Sep 11 00:21:39.762041 containerd[1552]: time="2025-09-11T00:21:39.761965022Z" level=info msg="StartContainer for \"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" returns successfully" Sep 11 00:21:40.277265 kubelet[2738]: E0911 00:21:40.277143 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:40.278934 kubelet[2738]: E0911 00:21:40.278013 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 11 00:21:40.304669 kubelet[2738]: I0911 00:21:40.304448 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6km7x" podStartSLOduration=25.016886315 podStartE2EDuration="31.304424313s" podCreationTimestamp="2025-09-11 00:21:09 +0000 UTC" firstStartedPulling="2025-09-11 00:21:33.291156353 +0000 UTC m=+47.017206347" lastFinishedPulling="2025-09-11 00:21:39.578694363 +0000 UTC m=+53.304744345" observedRunningTime="2025-09-11 00:21:40.30331638 +0000 UTC m=+54.029366379" watchObservedRunningTime="2025-09-11 00:21:40.304424313 +0000 UTC m=+54.030474283" Sep 11 00:21:40.513335 containerd[1552]: time="2025-09-11T00:21:40.513283780Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" id:\"df34705480b0e517c2c82c8a2b188a8dd6b7f6fc1232024ec9449d4f238ca345\" pid:4845 exit_status:1 exited_at:{seconds:1757550100 nanos:512646649}" Sep 11 00:21:41.492306 containerd[1552]: time="2025-09-11T00:21:41.492223247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" id:\"5ffb6436e31bf302f3c4dab7087cb4c47a70676392d92e32bc9aa79b9b867e23\" pid:4868 exit_status:1 exited_at:{seconds:1757550101 nanos:491786748}" Sep 11 00:21:42.854340 containerd[1552]: time="2025-09-11T00:21:42.854282297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" id:\"26206f9e9e88828d7a1c2961575c18a000ca12b06936604d9fd48c0266c785d0\" pid:4897 exit_status:1 exited_at:{seconds:1757550102 nanos:851294916}" Sep 11 00:21:42.915647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348758078.mount: Deactivated successfully. 
Sep 11 00:21:42.950167 containerd[1552]: time="2025-09-11T00:21:42.949999531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:42.951463 containerd[1552]: time="2025-09-11T00:21:42.951422639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 11 00:21:42.951799 containerd[1552]: time="2025-09-11T00:21:42.951771952Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:42.967689 containerd[1552]: time="2025-09-11T00:21:42.967632591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:42.967976 systemd[1]: Started sshd@8-137.184.47.128:22-147.75.109.163:59446.service - OpenSSH per-connection server daemon (147.75.109.163:59446). 
Sep 11 00:21:42.970020 containerd[1552]: time="2025-09-11T00:21:42.969947450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.390960043s" Sep 11 00:21:42.970795 containerd[1552]: time="2025-09-11T00:21:42.970601325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 11 00:21:42.976992 containerd[1552]: time="2025-09-11T00:21:42.976692332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:21:42.986734 containerd[1552]: time="2025-09-11T00:21:42.986120947Z" level=info msg="CreateContainer within sandbox \"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:21:43.010565 containerd[1552]: time="2025-09-11T00:21:43.007855836Z" level=info msg="Container 80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:43.035856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284939405.mount: Deactivated successfully. 
Sep 11 00:21:43.120127 containerd[1552]: time="2025-09-11T00:21:43.118850745Z" level=info msg="CreateContainer within sandbox \"10315239c3149f60c3c20a989dd37604a51c1b4dffab1cdc52fab6fe7c0fd0ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102\"" Sep 11 00:21:43.120477 containerd[1552]: time="2025-09-11T00:21:43.120284644Z" level=info msg="StartContainer for \"80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102\"" Sep 11 00:21:43.127557 containerd[1552]: time="2025-09-11T00:21:43.127365390Z" level=info msg="connecting to shim 80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102" address="unix:///run/containerd/s/2ff49b5e1c9401edc9ef369c9e5b259697afd472be1e42dd05f7d87fe680ab49" protocol=ttrpc version=3 Sep 11 00:21:43.187118 systemd[1]: Started cri-containerd-80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102.scope - libcontainer container 80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102. Sep 11 00:21:43.196672 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 59446 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:21:43.201880 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:43.222489 systemd-logind[1523]: New session 8 of user core. Sep 11 00:21:43.241160 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:21:43.357056 containerd[1552]: time="2025-09-11T00:21:43.357005991Z" level=info msg="StartContainer for \"80cf8d5898b8d4e53e018e3fd56754580982053a0eca0736b475229c2e3ab102\" returns successfully" Sep 11 00:21:44.087969 sshd[4943]: Connection closed by 147.75.109.163 port 59446 Sep 11 00:21:44.088500 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:44.098870 systemd[1]: sshd@8-137.184.47.128:22-147.75.109.163:59446.service: Deactivated successfully. 
Sep 11 00:21:44.104602 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:21:44.106166 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Sep 11 00:21:44.109050 systemd-logind[1523]: Removed session 8. Sep 11 00:21:47.345366 containerd[1552]: time="2025-09-11T00:21:47.345154268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:47.349357 containerd[1552]: time="2025-09-11T00:21:47.348947254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 11 00:21:47.357597 containerd[1552]: time="2025-09-11T00:21:47.357079788Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:47.361565 containerd[1552]: time="2025-09-11T00:21:47.360235108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:47.361565 containerd[1552]: time="2025-09-11T00:21:47.361411982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.38462497s" Sep 11 00:21:47.361565 containerd[1552]: time="2025-09-11T00:21:47.361468619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 11 00:21:47.363957 containerd[1552]: 
time="2025-09-11T00:21:47.363891725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:21:47.460398 containerd[1552]: time="2025-09-11T00:21:47.460213070Z" level=info msg="CreateContainer within sandbox \"49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:21:47.489926 containerd[1552]: time="2025-09-11T00:21:47.489858192Z" level=info msg="Container e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:47.507983 containerd[1552]: time="2025-09-11T00:21:47.507922394Z" level=info msg="CreateContainer within sandbox \"49b2730b9a5ca7c832d2f2b2766adfcb111637561644986a1fedce1d58eecb40\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\"" Sep 11 00:21:47.512617 containerd[1552]: time="2025-09-11T00:21:47.512555248Z" level=info msg="StartContainer for \"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\"" Sep 11 00:21:47.518914 containerd[1552]: time="2025-09-11T00:21:47.518829879Z" level=info msg="connecting to shim e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf" address="unix:///run/containerd/s/ef966534f5cf87793c68b29742d98b9ab48041f372764400e24b4dcb6c6c5332" protocol=ttrpc version=3 Sep 11 00:21:47.623143 systemd[1]: Started cri-containerd-e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf.scope - libcontainer container e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf. 
Sep 11 00:21:47.722278 containerd[1552]: time="2025-09-11T00:21:47.722202300Z" level=info msg="StartContainer for \"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\" returns successfully" Sep 11 00:21:48.381806 kubelet[2738]: I0911 00:21:48.381516 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f4d899447-h85lk" podStartSLOduration=6.649374173 podStartE2EDuration="18.381014916s" podCreationTimestamp="2025-09-11 00:21:30 +0000 UTC" firstStartedPulling="2025-09-11 00:21:31.244263761 +0000 UTC m=+44.970313725" lastFinishedPulling="2025-09-11 00:21:42.975904493 +0000 UTC m=+56.701954468" observedRunningTime="2025-09-11 00:21:44.388307181 +0000 UTC m=+58.114357171" watchObservedRunningTime="2025-09-11 00:21:48.381014916 +0000 UTC m=+62.107064912" Sep 11 00:21:48.384363 kubelet[2738]: I0911 00:21:48.384077 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bd84ccdd6-j58dq" podStartSLOduration=26.964356593 podStartE2EDuration="40.384059807s" podCreationTimestamp="2025-09-11 00:21:08 +0000 UTC" firstStartedPulling="2025-09-11 00:21:33.943801929 +0000 UTC m=+47.669851892" lastFinishedPulling="2025-09-11 00:21:47.363505118 +0000 UTC m=+61.089555106" observedRunningTime="2025-09-11 00:21:48.379848756 +0000 UTC m=+62.105898751" watchObservedRunningTime="2025-09-11 00:21:48.384059807 +0000 UTC m=+62.110109821" Sep 11 00:21:48.435394 containerd[1552]: time="2025-09-11T00:21:48.435089809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\" id:\"44445936132a1937cd5605b06416696e3faeb53c68a27e0912d113cb3a4d55d6\" pid:5036 exited_at:{seconds:1757550108 nanos:433514814}" Sep 11 00:21:49.128402 systemd[1]: Started sshd@9-137.184.47.128:22-147.75.109.163:59452.service - OpenSSH per-connection server daemon (147.75.109.163:59452). 
Sep 11 00:21:49.307832 sshd[5046]: Accepted publickey for core from 147.75.109.163 port 59452 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:21:49.312038 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:49.324720 systemd-logind[1523]: New session 9 of user core. Sep 11 00:21:49.335309 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 11 00:21:50.037680 sshd[5048]: Connection closed by 147.75.109.163 port 59452 Sep 11 00:21:50.038198 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:50.047689 systemd[1]: sshd@9-137.184.47.128:22-147.75.109.163:59452.service: Deactivated successfully. Sep 11 00:21:50.053858 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:21:50.058744 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:21:50.061371 systemd-logind[1523]: Removed session 9. Sep 11 00:21:51.500999 containerd[1552]: time="2025-09-11T00:21:51.500896717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:51.503767 containerd[1552]: time="2025-09-11T00:21:51.503712140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 11 00:21:51.507603 containerd[1552]: time="2025-09-11T00:21:51.507045202Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:51.511556 containerd[1552]: time="2025-09-11T00:21:51.511468538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:51.512448 containerd[1552]: 
time="2025-09-11T00:21:51.512396114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.147707401s" Sep 11 00:21:51.512647 containerd[1552]: time="2025-09-11T00:21:51.512456705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:21:51.515601 containerd[1552]: time="2025-09-11T00:21:51.515100652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:21:51.526988 containerd[1552]: time="2025-09-11T00:21:51.526924776Z" level=info msg="CreateContainer within sandbox \"4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:21:51.541222 containerd[1552]: time="2025-09-11T00:21:51.541146022Z" level=info msg="Container a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:51.578902 containerd[1552]: time="2025-09-11T00:21:51.578819412Z" level=info msg="CreateContainer within sandbox \"4958e8e09cd7426a66c5e33737ee8276143828e1247a4e286ebe1f45f0f7891d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da\"" Sep 11 00:21:51.580918 containerd[1552]: time="2025-09-11T00:21:51.580871416Z" level=info msg="StartContainer for \"a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da\"" Sep 11 00:21:51.583397 containerd[1552]: time="2025-09-11T00:21:51.583268497Z" level=info msg="connecting to shim 
a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da" address="unix:///run/containerd/s/5b4243b489821df15094301b6c190dc34abb0b855a39e0b44070b3feb16c0ec9" protocol=ttrpc version=3 Sep 11 00:21:51.622205 systemd[1]: Started cri-containerd-a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da.scope - libcontainer container a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da. Sep 11 00:21:51.755913 containerd[1552]: time="2025-09-11T00:21:51.754672394Z" level=info msg="StartContainer for \"a26b4cf1e50c42a79dcd1bc18bf09770f5822988a8ece67875f1f409f26680da\" returns successfully" Sep 11 00:21:51.995431 containerd[1552]: time="2025-09-11T00:21:51.995337663Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:51.996779 containerd[1552]: time="2025-09-11T00:21:51.996724582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:21:52.000225 containerd[1552]: time="2025-09-11T00:21:52.000137648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 483.85669ms" Sep 11 00:21:52.000225 containerd[1552]: time="2025-09-11T00:21:52.000220515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 11 00:21:52.006624 containerd[1552]: time="2025-09-11T00:21:52.006467534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:21:52.021341 containerd[1552]: time="2025-09-11T00:21:52.021276153Z" level=info msg="CreateContainer within 
sandbox \"698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:21:52.036114 containerd[1552]: time="2025-09-11T00:21:52.036045470Z" level=info msg="Container 50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:52.107726 containerd[1552]: time="2025-09-11T00:21:52.107663556Z" level=info msg="CreateContainer within sandbox \"698134411da5a78d3bd961887da70d9ca8e5b1f148e9fe9eb5b50d67f1d595db\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571\"" Sep 11 00:21:52.109075 containerd[1552]: time="2025-09-11T00:21:52.109029897Z" level=info msg="StartContainer for \"50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571\"" Sep 11 00:21:52.111241 containerd[1552]: time="2025-09-11T00:21:52.111184628Z" level=info msg="connecting to shim 50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571" address="unix:///run/containerd/s/89023da76aca30091b8a6b031ddc96b7c26a13549cbb1583b953ebf1d009135e" protocol=ttrpc version=3 Sep 11 00:21:52.164899 systemd[1]: Started cri-containerd-50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571.scope - libcontainer container 50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571. 
Sep 11 00:21:52.293994 containerd[1552]: time="2025-09-11T00:21:52.293942113Z" level=info msg="StartContainer for \"50c7878d2493b5c213c7434e595065110734142b219546231bd967be16709571\" returns successfully" Sep 11 00:21:52.405685 kubelet[2738]: I0911 00:21:52.405564 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d5bf468dc-pklpb" podStartSLOduration=32.798833296 podStartE2EDuration="48.405185985s" podCreationTimestamp="2025-09-11 00:21:04 +0000 UTC" firstStartedPulling="2025-09-11 00:21:35.908141462 +0000 UTC m=+49.634191451" lastFinishedPulling="2025-09-11 00:21:51.514494118 +0000 UTC m=+65.240544140" observedRunningTime="2025-09-11 00:21:52.404370618 +0000 UTC m=+66.130420605" watchObservedRunningTime="2025-09-11 00:21:52.405185985 +0000 UTC m=+66.131235970" Sep 11 00:21:54.149822 containerd[1552]: time="2025-09-11T00:21:54.149724837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:54.151963 containerd[1552]: time="2025-09-11T00:21:54.151840339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 11 00:21:54.152954 containerd[1552]: time="2025-09-11T00:21:54.152914141Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:54.158494 containerd[1552]: time="2025-09-11T00:21:54.158435477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:54.160043 containerd[1552]: time="2025-09-11T00:21:54.159922558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id 
\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.153395601s" Sep 11 00:21:54.160043 containerd[1552]: time="2025-09-11T00:21:54.159984566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 11 00:21:54.172778 containerd[1552]: time="2025-09-11T00:21:54.172719899Z" level=info msg="CreateContainer within sandbox \"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:21:54.275571 containerd[1552]: time="2025-09-11T00:21:54.270801814Z" level=info msg="Container 99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:54.353560 containerd[1552]: time="2025-09-11T00:21:54.352278722Z" level=info msg="CreateContainer within sandbox \"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4\"" Sep 11 00:21:54.361570 containerd[1552]: time="2025-09-11T00:21:54.359991717Z" level=info msg="StartContainer for \"99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4\"" Sep 11 00:21:54.364969 containerd[1552]: time="2025-09-11T00:21:54.363162512Z" level=info msg="connecting to shim 99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4" address="unix:///run/containerd/s/db253ef2464fa701f718439149e092c2f40fb7da2093737c717f30a5924c81fe" protocol=ttrpc version=3 Sep 11 00:21:54.449845 kubelet[2738]: I0911 00:21:54.448139 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:21:54.486778 systemd[1]: 
Started cri-containerd-99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4.scope - libcontainer container 99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4. Sep 11 00:21:54.532106 kubelet[2738]: I0911 00:21:54.531147 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d5bf468dc-mhkf5" podStartSLOduration=34.850694277 podStartE2EDuration="50.531120356s" podCreationTimestamp="2025-09-11 00:21:04 +0000 UTC" firstStartedPulling="2025-09-11 00:21:36.324585167 +0000 UTC m=+50.050635132" lastFinishedPulling="2025-09-11 00:21:52.005011227 +0000 UTC m=+65.731061211" observedRunningTime="2025-09-11 00:21:52.438105774 +0000 UTC m=+66.164155769" watchObservedRunningTime="2025-09-11 00:21:54.531120356 +0000 UTC m=+68.257170357" Sep 11 00:21:54.767967 containerd[1552]: time="2025-09-11T00:21:54.767764693Z" level=info msg="StartContainer for \"99e257f49b4f4a4fb80d335e39864c654f1e26a73a59e0c72fd96e8ef78908c4\" returns successfully" Sep 11 00:21:54.771198 containerd[1552]: time="2025-09-11T00:21:54.771093056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:21:55.057885 systemd[1]: Started sshd@10-137.184.47.128:22-147.75.109.163:57560.service - OpenSSH per-connection server daemon (147.75.109.163:57560). Sep 11 00:21:55.283233 sshd[5190]: Accepted publickey for core from 147.75.109.163 port 57560 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:21:55.286486 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:55.296884 systemd-logind[1523]: New session 10 of user core. Sep 11 00:21:55.305108 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 11 00:21:55.979238 sshd[5192]: Connection closed by 147.75.109.163 port 57560 Sep 11 00:21:55.980096 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:56.004393 systemd[1]: sshd@10-137.184.47.128:22-147.75.109.163:57560.service: Deactivated successfully. Sep 11 00:21:56.012855 systemd[1]: session-10.scope: Deactivated successfully. Sep 11 00:21:56.015161 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Sep 11 00:21:56.023752 systemd[1]: Started sshd@11-137.184.47.128:22-147.75.109.163:57570.service - OpenSSH per-connection server daemon (147.75.109.163:57570). Sep 11 00:21:56.026506 systemd-logind[1523]: Removed session 10. Sep 11 00:21:56.119506 sshd[5205]: Accepted publickey for core from 147.75.109.163 port 57570 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:21:56.122288 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:56.132047 systemd-logind[1523]: New session 11 of user core. Sep 11 00:21:56.138853 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 11 00:21:56.545741 sshd[5207]: Connection closed by 147.75.109.163 port 57570 Sep 11 00:21:56.546406 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:56.575943 systemd[1]: sshd@11-137.184.47.128:22-147.75.109.163:57570.service: Deactivated successfully. Sep 11 00:21:56.583191 systemd[1]: session-11.scope: Deactivated successfully. Sep 11 00:21:56.588455 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Sep 11 00:21:56.602009 systemd[1]: Started sshd@12-137.184.47.128:22-147.75.109.163:57584.service - OpenSSH per-connection server daemon (147.75.109.163:57584). Sep 11 00:21:56.610656 systemd-logind[1523]: Removed session 11. 
Sep 11 00:21:56.781192 sshd[5220]: Accepted publickey for core from 147.75.109.163 port 57584 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ Sep 11 00:21:56.787507 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:21:56.814689 systemd-logind[1523]: New session 12 of user core. Sep 11 00:21:56.818191 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 11 00:21:57.169802 sshd[5222]: Connection closed by 147.75.109.163 port 57584 Sep 11 00:21:57.170391 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Sep 11 00:21:57.183008 systemd[1]: sshd@12-137.184.47.128:22-147.75.109.163:57584.service: Deactivated successfully. Sep 11 00:21:57.193192 systemd[1]: session-12.scope: Deactivated successfully. Sep 11 00:21:57.199441 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Sep 11 00:21:57.203511 systemd-logind[1523]: Removed session 12. Sep 11 00:21:57.223171 containerd[1552]: time="2025-09-11T00:21:57.223090920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:57.225346 containerd[1552]: time="2025-09-11T00:21:57.225280543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 11 00:21:57.226092 containerd[1552]: time="2025-09-11T00:21:57.226042423Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:57.236443 containerd[1552]: time="2025-09-11T00:21:57.235746965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:21:57.236901 
containerd[1552]: time="2025-09-11T00:21:57.236832552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.465243485s" Sep 11 00:21:57.236901 containerd[1552]: time="2025-09-11T00:21:57.236873431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 11 00:21:57.270966 containerd[1552]: time="2025-09-11T00:21:57.270899113Z" level=info msg="CreateContainer within sandbox \"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 11 00:21:57.292593 containerd[1552]: time="2025-09-11T00:21:57.291846700Z" level=info msg="Container 0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:21:57.314318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198318167.mount: Deactivated successfully. 
Sep 11 00:21:57.321716 containerd[1552]: time="2025-09-11T00:21:57.321163824Z" level=info msg="CreateContainer within sandbox \"7179c000d7a58c89b5a73443cc28448412fb8cfddd85e0c9462d3e162ffb4c0e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda\"" Sep 11 00:21:57.325636 containerd[1552]: time="2025-09-11T00:21:57.325118339Z" level=info msg="StartContainer for \"0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda\"" Sep 11 00:21:57.339916 containerd[1552]: time="2025-09-11T00:21:57.339516417Z" level=info msg="connecting to shim 0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda" address="unix:///run/containerd/s/db253ef2464fa701f718439149e092c2f40fb7da2093737c717f30a5924c81fe" protocol=ttrpc version=3 Sep 11 00:21:57.423922 systemd[1]: Started cri-containerd-0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda.scope - libcontainer container 0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda. 
Sep 11 00:21:57.549306 containerd[1552]: time="2025-09-11T00:21:57.548865590Z" level=info msg="StartContainer for \"0cc5a6ad9a4f996826749fa94eff379a680603c133deb169edc51a9a75b13bda\" returns successfully" Sep 11 00:21:58.190301 kubelet[2738]: I0911 00:21:58.183650 2738 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 11 00:21:58.211272 kubelet[2738]: I0911 00:21:58.211205 2738 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 11 00:21:58.512695 kubelet[2738]: I0911 00:21:58.512379 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mqnt4" podStartSLOduration=30.566459516 podStartE2EDuration="50.512354445s" podCreationTimestamp="2025-09-11 00:21:08 +0000 UTC" firstStartedPulling="2025-09-11 00:21:37.294377684 +0000 UTC m=+51.020427643" lastFinishedPulling="2025-09-11 00:21:57.240272602 +0000 UTC m=+70.966322572" observedRunningTime="2025-09-11 00:21:58.511071935 +0000 UTC m=+72.237121924" watchObservedRunningTime="2025-09-11 00:21:58.512354445 +0000 UTC m=+72.238404473" Sep 11 00:22:01.374825 containerd[1552]: time="2025-09-11T00:22:01.374727300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" id:\"5d30dbe58caf6073d6088c6ddda55d3b82f6b185048038f0b828e139587b2817\" pid:5286 exit_status:1 exited_at:{seconds:1757550121 nanos:371820321}" Sep 11 00:22:02.195680 systemd[1]: Started sshd@13-137.184.47.128:22-147.75.109.163:34456.service - OpenSSH per-connection server daemon (147.75.109.163:34456). 
Sep 11 00:22:02.442882 sshd[5298]: Accepted publickey for core from 147.75.109.163 port 34456 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:02.446469 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:02.456764 systemd-logind[1523]: New session 13 of user core.
Sep 11 00:22:02.469051 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:22:03.294274 sshd[5300]: Connection closed by 147.75.109.163 port 34456
Sep 11 00:22:03.296238 sshd-session[5298]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:03.302734 systemd[1]: sshd@13-137.184.47.128:22-147.75.109.163:34456.service: Deactivated successfully.
Sep 11 00:22:03.305833 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:22:03.308355 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:22:03.310940 systemd-logind[1523]: Removed session 13.
Sep 11 00:22:08.314036 systemd[1]: Started sshd@14-137.184.47.128:22-147.75.109.163:34468.service - OpenSSH per-connection server daemon (147.75.109.163:34468).
Sep 11 00:22:08.407362 sshd[5315]: Accepted publickey for core from 147.75.109.163 port 34468 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:08.411605 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:08.423926 systemd-logind[1523]: New session 14 of user core.
Sep 11 00:22:08.429944 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:22:08.763733 sshd[5317]: Connection closed by 147.75.109.163 port 34468
Sep 11 00:22:08.765246 sshd-session[5315]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:08.771943 systemd[1]: sshd@14-137.184.47.128:22-147.75.109.163:34468.service: Deactivated successfully.
Sep 11 00:22:08.777847 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:22:08.782727 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:22:08.785456 systemd-logind[1523]: Removed session 14.
Sep 11 00:22:12.596209 containerd[1552]: time="2025-09-11T00:22:12.596134080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" id:\"8c46ace196a05ac87b402200f67cceadef414f3fb6ce92567f9d3946c50e71e1\" pid:5340 exited_at:{seconds:1757550132 nanos:593096255}"
Sep 11 00:22:13.585379 kubelet[2738]: E0911 00:22:13.585308 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:22:13.779317 systemd[1]: Started sshd@15-137.184.47.128:22-147.75.109.163:59160.service - OpenSSH per-connection server daemon (147.75.109.163:59160).
Sep 11 00:22:13.930521 containerd[1552]: time="2025-09-11T00:22:13.930382975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15e293ee608435bcd601886e25ac5efb672b4d5c36086aa8e6ef97af00bfede3\" id:\"78d18d09296fd32b95ac123c11964947c37768423e30dc64c004fee0c9ec8d32\" pid:5373 exited_at:{seconds:1757550133 nanos:929817261}"
Sep 11 00:22:13.999004 sshd[5371]: Accepted publickey for core from 147.75.109.163 port 59160 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:14.001829 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:14.023291 systemd-logind[1523]: New session 15 of user core.
Sep 11 00:22:14.029010 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:22:14.727026 sshd[5385]: Connection closed by 147.75.109.163 port 59160
Sep 11 00:22:14.729860 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:14.736183 systemd[1]: sshd@15-137.184.47.128:22-147.75.109.163:59160.service: Deactivated successfully.
Sep 11 00:22:14.739436 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:22:14.741383 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:22:14.745586 systemd-logind[1523]: Removed session 15.
Sep 11 00:22:16.332362 containerd[1552]: time="2025-09-11T00:22:16.332291717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\" id:\"411f996887361689c9ab8a5c27d5fb68b2b846fbef1bd9a4281f1aa5559189c2\" pid:5409 exited_at:{seconds:1757550136 nanos:331173828}"
Sep 11 00:22:17.570997 kubelet[2738]: E0911 00:22:17.570925 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:22:18.555099 containerd[1552]: time="2025-09-11T00:22:18.555015217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8748d63e72df5f8e6cb964ffa6a6d0ece085be8b41f9a9fb37f3952c83f7faf\" id:\"2e47e24654d9b8c2cb85f77fb32f3abea4770196b16409a77640f85b40f2e9fa\" pid:5434 exited_at:{seconds:1757550138 nanos:554463686}"
Sep 11 00:22:19.743560 systemd[1]: Started sshd@16-137.184.47.128:22-147.75.109.163:59174.service - OpenSSH per-connection server daemon (147.75.109.163:59174).
Sep 11 00:22:19.854600 sshd[5445]: Accepted publickey for core from 147.75.109.163 port 59174 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:19.856308 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:19.862904 systemd-logind[1523]: New session 16 of user core.
Sep 11 00:22:19.870998 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:22:20.147606 sshd[5447]: Connection closed by 147.75.109.163 port 59174
Sep 11 00:22:20.148581 sshd-session[5445]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:20.163120 systemd[1]: sshd@16-137.184.47.128:22-147.75.109.163:59174.service: Deactivated successfully.
Sep 11 00:22:20.167522 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:22:20.169095 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:22:20.177988 systemd[1]: Started sshd@17-137.184.47.128:22-147.75.109.163:49562.service - OpenSSH per-connection server daemon (147.75.109.163:49562).
Sep 11 00:22:20.178798 systemd-logind[1523]: Removed session 16.
Sep 11 00:22:20.262264 sshd[5459]: Accepted publickey for core from 147.75.109.163 port 49562 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:20.264379 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:20.270705 systemd-logind[1523]: New session 17 of user core.
Sep 11 00:22:20.278789 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:22:20.569446 kubelet[2738]: E0911 00:22:20.568226 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:22:20.580756 kubelet[2738]: E0911 00:22:20.579675 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:22:20.628582 sshd[5461]: Connection closed by 147.75.109.163 port 49562
Sep 11 00:22:20.630335 sshd-session[5459]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:20.646898 systemd[1]: Started sshd@18-137.184.47.128:22-147.75.109.163:49564.service - OpenSSH per-connection server daemon (147.75.109.163:49564).
Sep 11 00:22:20.647908 systemd[1]: sshd@17-137.184.47.128:22-147.75.109.163:49562.service: Deactivated successfully.
Sep 11 00:22:20.653221 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:22:20.658786 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:22:20.667924 systemd-logind[1523]: Removed session 17.
Sep 11 00:22:20.762619 sshd[5468]: Accepted publickey for core from 147.75.109.163 port 49564 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:20.765105 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:20.773215 systemd-logind[1523]: New session 18 of user core.
Sep 11 00:22:20.778838 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:22:21.602604 sshd[5473]: Connection closed by 147.75.109.163 port 49564
Sep 11 00:22:21.606221 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:21.625853 systemd[1]: sshd@18-137.184.47.128:22-147.75.109.163:49564.service: Deactivated successfully.
Sep 11 00:22:21.630361 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:22:21.634233 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:22:21.643793 systemd[1]: Started sshd@19-137.184.47.128:22-147.75.109.163:49580.service - OpenSSH per-connection server daemon (147.75.109.163:49580).
Sep 11 00:22:21.650804 systemd-logind[1523]: Removed session 18.
Sep 11 00:22:21.766910 sshd[5487]: Accepted publickey for core from 147.75.109.163 port 49580 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:21.769488 sshd-session[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:21.779647 systemd-logind[1523]: New session 19 of user core.
Sep 11 00:22:21.787114 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:22:22.662112 sshd[5493]: Connection closed by 147.75.109.163 port 49580
Sep 11 00:22:22.665770 sshd-session[5487]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:22.682569 systemd[1]: sshd@19-137.184.47.128:22-147.75.109.163:49580.service: Deactivated successfully.
Sep 11 00:22:22.692073 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:22:22.700946 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:22:22.709334 systemd[1]: Started sshd@20-137.184.47.128:22-147.75.109.163:49584.service - OpenSSH per-connection server daemon (147.75.109.163:49584).
Sep 11 00:22:22.727241 systemd-logind[1523]: Removed session 19.
Sep 11 00:22:22.868901 sshd[5503]: Accepted publickey for core from 147.75.109.163 port 49584 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:22.870920 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:22.879620 systemd-logind[1523]: New session 20 of user core.
Sep 11 00:22:22.886867 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 00:22:23.099422 sshd[5507]: Connection closed by 147.75.109.163 port 49584
Sep 11 00:22:23.100322 sshd-session[5503]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:23.106719 systemd[1]: sshd@20-137.184.47.128:22-147.75.109.163:49584.service: Deactivated successfully.
Sep 11 00:22:23.110268 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 00:22:23.112021 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit.
Sep 11 00:22:23.114058 systemd-logind[1523]: Removed session 20.
Sep 11 00:22:28.119620 systemd[1]: Started sshd@21-137.184.47.128:22-147.75.109.163:49586.service - OpenSSH per-connection server daemon (147.75.109.163:49586).
Sep 11 00:22:28.199651 sshd[5521]: Accepted publickey for core from 147.75.109.163 port 49586 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:28.201830 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:28.210209 systemd-logind[1523]: New session 21 of user core.
Sep 11 00:22:28.216878 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 00:22:28.454969 sshd[5523]: Connection closed by 147.75.109.163 port 49586
Sep 11 00:22:28.454720 sshd-session[5521]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:28.461449 systemd[1]: sshd@21-137.184.47.128:22-147.75.109.163:49586.service: Deactivated successfully.
Sep 11 00:22:28.464359 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 00:22:28.465485 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit.
Sep 11 00:22:28.470331 systemd-logind[1523]: Removed session 21.
Sep 11 00:22:29.565356 kubelet[2738]: E0911 00:22:29.565282 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 11 00:22:31.274029 containerd[1552]: time="2025-09-11T00:22:31.273882775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cd8fccd60d4224de94cdafe112a1ab7db1767380c9e61471857ae82d09c8f24e\" id:\"02f4580e5f35f9c2a4ee45f838c01b189e1d76ca62e8531cf8f2959eb51007cb\" pid:5545 exited_at:{seconds:1757550151 nanos:273390436}"
Sep 11 00:22:33.475756 systemd[1]: Started sshd@22-137.184.47.128:22-147.75.109.163:50134.service - OpenSSH per-connection server daemon (147.75.109.163:50134).
Sep 11 00:22:33.629641 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 50134 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:33.631356 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:33.638025 systemd-logind[1523]: New session 22 of user core.
Sep 11 00:22:33.645836 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 00:22:34.210350 sshd[5558]: Connection closed by 147.75.109.163 port 50134
Sep 11 00:22:34.213784 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:34.221262 systemd[1]: sshd@22-137.184.47.128:22-147.75.109.163:50134.service: Deactivated successfully.
Sep 11 00:22:34.224929 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 00:22:34.226506 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit.
Sep 11 00:22:34.230167 systemd-logind[1523]: Removed session 22.
Sep 11 00:22:39.238857 systemd[1]: Started sshd@23-137.184.47.128:22-147.75.109.163:50144.service - OpenSSH per-connection server daemon (147.75.109.163:50144).
Sep 11 00:22:39.366337 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 50144 ssh2: RSA SHA256:75v2InfL/m+9WH/isPfMfMWFJ5o78V3wTlaMzBZardQ
Sep 11 00:22:39.370142 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:22:39.380631 systemd-logind[1523]: New session 23 of user core.
Sep 11 00:22:39.384891 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 00:22:39.920008 sshd[5571]: Connection closed by 147.75.109.163 port 50144
Sep 11 00:22:39.920835 sshd-session[5569]: pam_unix(sshd:session): session closed for user core
Sep 11 00:22:39.930969 systemd[1]: sshd@23-137.184.47.128:22-147.75.109.163:50144.service: Deactivated successfully.
Sep 11 00:22:39.938464 systemd[1]: session-23.scope: Deactivated successfully.
Sep 11 00:22:39.943201 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit.
Sep 11 00:22:39.948025 systemd-logind[1523]: Removed session 23.
Sep 11 00:22:41.566295 kubelet[2738]: E0911 00:22:41.565739 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"