Sep 9 05:35:09.037629 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:35:09.037678 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:09.037705 kernel: BIOS-provided physical RAM map:
Sep 9 05:35:09.037715 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 05:35:09.037726 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 05:35:09.037736 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 05:35:09.037749 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 9 05:35:09.037766 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 9 05:35:09.037780 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:35:09.037789 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 05:35:09.037799 kernel: NX (Execute Disable) protection: active
Sep 9 05:35:09.037809 kernel: APIC: Static calls initialized
Sep 9 05:35:09.037820 kernel: SMBIOS 2.8 present.
Sep 9 05:35:09.037831 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 9 05:35:09.037848 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:35:09.037859 kernel: Hypervisor detected: KVM
Sep 9 05:35:09.037875 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:35:09.037886 kernel: kvm-clock: using sched offset of 5455397851 cycles
Sep 9 05:35:09.037899 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:35:09.037910 kernel: tsc: Detected 1999.997 MHz processor
Sep 9 05:35:09.037920 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:35:09.037940 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:35:09.037951 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 9 05:35:09.037968 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 05:35:09.037980 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:35:09.037992 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:35:09.038004 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 9 05:35:09.038017 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038028 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038040 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038050 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 9 05:35:09.038062 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038079 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038091 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038102 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:35:09.038114 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 9 05:35:09.038126 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 9 05:35:09.038138 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 9 05:35:09.038150 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 9 05:35:09.038161 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 9 05:35:09.040225 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 9 05:35:09.040272 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 9 05:35:09.040286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 9 05:35:09.040299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 9 05:35:09.040312 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Sep 9 05:35:09.040324 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Sep 9 05:35:09.040345 kernel: Zone ranges:
Sep 9 05:35:09.040357 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:35:09.040369 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 9 05:35:09.040380 kernel: Normal empty
Sep 9 05:35:09.040392 kernel: Device empty
Sep 9 05:35:09.040404 kernel: Movable zone start for each node
Sep 9 05:35:09.040416 kernel: Early memory node ranges
Sep 9 05:35:09.040428 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 05:35:09.040441 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 9 05:35:09.040459 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 9 05:35:09.040471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:35:09.040493 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 05:35:09.040506 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 9 05:35:09.040520 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:35:09.040534 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:35:09.040559 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:35:09.040572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:35:09.040589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:35:09.040608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:35:09.040619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:35:09.040633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:35:09.040641 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:35:09.040649 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:35:09.040658 kernel: TSC deadline timer available
Sep 9 05:35:09.040666 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:35:09.040674 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:35:09.040682 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:35:09.040693 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:35:09.040701 kernel: CPU topo: Num. cores per package: 2
Sep 9 05:35:09.040709 kernel: CPU topo: Num. threads per package: 2
Sep 9 05:35:09.040717 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 9 05:35:09.040726 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:35:09.040734 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 9 05:35:09.040742 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:35:09.040750 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:35:09.040759 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 9 05:35:09.040767 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 9 05:35:09.040778 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 9 05:35:09.040786 kernel: pcpu-alloc: [0] 0 1
Sep 9 05:35:09.040794 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 9 05:35:09.040805 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:35:09.040814 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:35:09.040822 kernel: random: crng init done
Sep 9 05:35:09.040830 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:35:09.040843 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 9 05:35:09.040858 kernel: Fallback order for Node 0: 0
Sep 9 05:35:09.040872 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Sep 9 05:35:09.040885 kernel: Policy zone: DMA32
Sep 9 05:35:09.040897 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:35:09.040909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 9 05:35:09.040921 kernel: Kernel/User page tables isolation: enabled
Sep 9 05:35:09.040932 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:35:09.040946 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:35:09.040959 kernel: Dynamic Preempt: voluntary
Sep 9 05:35:09.040976 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:35:09.040987 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:35:09.040997 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 9 05:35:09.041007 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:35:09.041015 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:35:09.041023 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:35:09.041031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:35:09.041039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 9 05:35:09.041048 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:09.041070 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:09.041092 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 9 05:35:09.041104 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 9 05:35:09.041117 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:35:09.041129 kernel: Console: colour VGA+ 80x25
Sep 9 05:35:09.041142 kernel: printk: legacy console [tty0] enabled
Sep 9 05:35:09.041154 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:35:09.041168 kernel: ACPI: Core revision 20240827
Sep 9 05:35:09.041203 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:35:09.041225 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:35:09.041234 kernel: x2apic enabled
Sep 9 05:35:09.041243 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:35:09.041275 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:35:09.041294 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Sep 9 05:35:09.041309 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Sep 9 05:35:09.041323 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 9 05:35:09.041338 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 9 05:35:09.041349 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:35:09.041361 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:35:09.041369 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:35:09.041381 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 9 05:35:09.041397 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:35:09.041412 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:35:09.041425 kernel: MDS: Mitigation: Clear CPU buffers
Sep 9 05:35:09.041438 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 9 05:35:09.041454 kernel: active return thunk: its_return_thunk
Sep 9 05:35:09.041467 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 9 05:35:09.041480 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:35:09.041494 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:35:09.041518 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:35:09.041600 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:35:09.041616 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 9 05:35:09.041631 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:35:09.041643 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:35:09.041656 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:35:09.041666 kernel: landlock: Up and running.
Sep 9 05:35:09.041674 kernel: SELinux: Initializing.
Sep 9 05:35:09.041684 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 05:35:09.041693 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 9 05:35:09.041703 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 9 05:35:09.041711 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 9 05:35:09.041720 kernel: signal: max sigframe size: 1776
Sep 9 05:35:09.041729 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:35:09.041742 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:35:09.041751 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:35:09.041760 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 9 05:35:09.041769 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:35:09.041784 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:35:09.041793 kernel: .... node #0, CPUs: #1
Sep 9 05:35:09.041802 kernel: smp: Brought up 1 node, 2 CPUs
Sep 9 05:35:09.041811 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Sep 9 05:35:09.041821 kernel: Memory: 1966912K/2096612K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 125144K reserved, 0K cma-reserved)
Sep 9 05:35:09.041833 kernel: devtmpfs: initialized
Sep 9 05:35:09.041842 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:35:09.041851 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:35:09.041860 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 9 05:35:09.041869 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:35:09.041878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:35:09.041886 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:35:09.041895 kernel: audit: type=2000 audit(1757396104.791:1): state=initialized audit_enabled=0 res=1
Sep 9 05:35:09.041905 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:35:09.041916 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:35:09.041925 kernel: cpuidle: using governor menu
Sep 9 05:35:09.041933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:35:09.041943 kernel: dca service started, version 1.12.1
Sep 9 05:35:09.041952 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:35:09.041961 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:35:09.041970 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:35:09.041984 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:35:09.041996 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:35:09.042013 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:35:09.042026 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:35:09.042039 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:35:09.042055 kernel: ACPI: Interpreter enabled
Sep 9 05:35:09.042067 kernel: ACPI: PM: (supports S0 S5)
Sep 9 05:35:09.042076 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:35:09.042085 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:35:09.042094 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:35:09.042102 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 9 05:35:09.042114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:35:09.044519 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:35:09.044674 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 9 05:35:09.044787 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 9 05:35:09.044799 kernel: acpiphp: Slot [3] registered
Sep 9 05:35:09.044818 kernel: acpiphp: Slot [4] registered
Sep 9 05:35:09.044841 kernel: acpiphp: Slot [5] registered
Sep 9 05:35:09.044862 kernel: acpiphp: Slot [6] registered
Sep 9 05:35:09.044872 kernel: acpiphp: Slot [7] registered
Sep 9 05:35:09.044880 kernel: acpiphp: Slot [8] registered
Sep 9 05:35:09.044889 kernel: acpiphp: Slot [9] registered
Sep 9 05:35:09.044897 kernel: acpiphp: Slot [10] registered
Sep 9 05:35:09.044906 kernel: acpiphp: Slot [11] registered
Sep 9 05:35:09.044914 kernel: acpiphp: Slot [12] registered
Sep 9 05:35:09.044923 kernel: acpiphp: Slot [13] registered
Sep 9 05:35:09.044931 kernel: acpiphp: Slot [14] registered
Sep 9 05:35:09.044940 kernel: acpiphp: Slot [15] registered
Sep 9 05:35:09.044951 kernel: acpiphp: Slot [16] registered
Sep 9 05:35:09.044959 kernel: acpiphp: Slot [17] registered
Sep 9 05:35:09.044968 kernel: acpiphp: Slot [18] registered
Sep 9 05:35:09.044976 kernel: acpiphp: Slot [19] registered
Sep 9 05:35:09.044984 kernel: acpiphp: Slot [20] registered
Sep 9 05:35:09.044992 kernel: acpiphp: Slot [21] registered
Sep 9 05:35:09.045000 kernel: acpiphp: Slot [22] registered
Sep 9 05:35:09.045009 kernel: acpiphp: Slot [23] registered
Sep 9 05:35:09.045029 kernel: acpiphp: Slot [24] registered
Sep 9 05:35:09.045044 kernel: acpiphp: Slot [25] registered
Sep 9 05:35:09.045057 kernel: acpiphp: Slot [26] registered
Sep 9 05:35:09.045070 kernel: acpiphp: Slot [27] registered
Sep 9 05:35:09.045083 kernel: acpiphp: Slot [28] registered
Sep 9 05:35:09.045097 kernel: acpiphp: Slot [29] registered
Sep 9 05:35:09.045112 kernel: acpiphp: Slot [30] registered
Sep 9 05:35:09.045122 kernel: acpiphp: Slot [31] registered
Sep 9 05:35:09.045130 kernel: PCI host bridge to bus 0000:00
Sep 9 05:35:09.045326 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:35:09.045432 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:35:09.045547 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:09.045671 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 9 05:35:09.045792 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 9 05:35:09.045878 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:35:09.046100 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:35:09.048400 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:35:09.048613 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Sep 9 05:35:09.048743 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Sep 9 05:35:09.048862 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Sep 9 05:35:09.048961 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Sep 9 05:35:09.049058 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Sep 9 05:35:09.049176 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Sep 9 05:35:09.049414 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Sep 9 05:35:09.049523 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Sep 9 05:35:09.049686 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Sep 9 05:35:09.049814 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 9 05:35:09.049907 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 9 05:35:09.050044 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:35:09.050170 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Sep 9 05:35:09.052373 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 9 05:35:09.052485 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Sep 9 05:35:09.052577 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Sep 9 05:35:09.052687 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:35:09.052800 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:35:09.052894 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Sep 9 05:35:09.052994 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Sep 9 05:35:09.053084 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 9 05:35:09.053228 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:35:09.053346 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Sep 9 05:35:09.053438 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Sep 9 05:35:09.053551 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 9 05:35:09.053683 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:09.053817 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Sep 9 05:35:09.053911 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Sep 9 05:35:09.054047 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 9 05:35:09.054150 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:09.056326 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 05:35:09.056440 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Sep 9 05:35:09.056578 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 9 05:35:09.056811 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:35:09.056996 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Sep 9 05:35:09.057130 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Sep 9 05:35:09.057315 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 9 05:35:09.057436 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:35:09.057560 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Sep 9 05:35:09.057660 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 9 05:35:09.057675 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:35:09.057684 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:35:09.057693 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:35:09.057702 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:35:09.057711 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 9 05:35:09.057719 kernel: iommu: Default domain type: Translated
Sep 9 05:35:09.057728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:35:09.057739 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:35:09.057748 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:35:09.057760 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 05:35:09.057778 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 9 05:35:09.057910 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 9 05:35:09.058003 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 9 05:35:09.058094 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:35:09.058105 kernel: vgaarb: loaded
Sep 9 05:35:09.058114 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:35:09.058126 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:35:09.058135 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:35:09.058143 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:35:09.058152 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:35:09.058161 kernel: pnp: PnP ACPI init
Sep 9 05:35:09.058170 kernel: pnp: PnP ACPI: found 4 devices
Sep 9 05:35:09.060234 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:35:09.060247 kernel: NET: Registered PF_INET protocol family
Sep 9 05:35:09.060257 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:35:09.060274 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 9 05:35:09.060283 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:35:09.060293 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 9 05:35:09.060304 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 9 05:35:09.060313 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 9 05:35:09.060322 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 05:35:09.060338 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 9 05:35:09.060347 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:35:09.060356 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:35:09.060508 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:35:09.060634 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:35:09.060794 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:35:09.060933 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 9 05:35:09.061025 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 9 05:35:09.061129 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 9 05:35:09.061255 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 9 05:35:09.061269 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 9 05:35:09.061373 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29405 usecs
Sep 9 05:35:09.061386 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:35:09.061397 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 9 05:35:09.061407 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Sep 9 05:35:09.061416 kernel: Initialise system trusted keyrings
Sep 9 05:35:09.061425 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 9 05:35:09.061434 kernel: Key type asymmetric registered
Sep 9 05:35:09.061442 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:35:09.061454 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:35:09.061465 kernel: io scheduler mq-deadline registered
Sep 9 05:35:09.061481 kernel: io scheduler kyber registered
Sep 9 05:35:09.061494 kernel: io scheduler bfq registered
Sep 9 05:35:09.061506 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:35:09.061518 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 9 05:35:09.061544 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 9 05:35:09.061558 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 9 05:35:09.061571 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:35:09.061583 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:35:09.061600 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:35:09.061613 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:35:09.061625 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:35:09.061638 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 05:35:09.061861 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 9 05:35:09.061967 kernel: rtc_cmos 00:03: registered as rtc0
Sep 9 05:35:09.062063 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T05:35:08 UTC (1757396108)
Sep 9 05:35:09.064277 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:35:09.064318 kernel: intel_pstate: CPU model not supported
Sep 9 05:35:09.064333 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:35:09.064346 kernel: Segment Routing with IPv6
Sep 9 05:35:09.064360 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:35:09.064374 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:35:09.064388 kernel: Key type dns_resolver registered
Sep 9 05:35:09.064402 kernel: IPI shorthand broadcast: enabled
Sep 9 05:35:09.064416 kernel: sched_clock: Marking stable (4091010914, 165349899)->(4378498564, -122137751)
Sep 9 05:35:09.064439 kernel: registered taskstats version 1
Sep 9 05:35:09.064461 kernel: Loading compiled-in X.509 certificates
Sep 9 05:35:09.064474 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:35:09.064487 kernel: Demotion targets for Node 0: null
Sep 9 05:35:09.064500 kernel: Key type .fscrypt registered
Sep 9 05:35:09.064513 kernel: Key type fscrypt-provisioning registered
Sep 9 05:35:09.064668 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:35:09.064687 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:35:09.064701 kernel: ima: No architecture policies found
Sep 9 05:35:09.064718 kernel: clk: Disabling unused clocks
Sep 9 05:35:09.064731 kernel: Warning: unable to open an initial console.
Sep 9 05:35:09.064745 kernel: Freeing unused kernel image (initmem) memory: 54076K
Sep 9 05:35:09.064759 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 05:35:09.064772 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 9 05:35:09.064785 kernel: Run /init as init process
Sep 9 05:35:09.064799 kernel: with arguments:
Sep 9 05:35:09.064814 kernel: /init
Sep 9 05:35:09.064827 kernel: with environment:
Sep 9 05:35:09.064844 kernel: HOME=/
Sep 9 05:35:09.064858 kernel: TERM=linux
Sep 9 05:35:09.064871 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 05:35:09.064887 systemd[1]: Successfully made /usr/ read-only.
Sep 9 05:35:09.064907 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:09.064922 systemd[1]: Detected virtualization kvm.
Sep 9 05:35:09.064935 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:09.064961 systemd[1]: Running in initrd.
Sep 9 05:35:09.064976 systemd[1]: No hostname configured, using default hostname.
Sep 9 05:35:09.064992 systemd[1]: Hostname set to .
Sep 9 05:35:09.065008 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:35:09.065035 systemd[1]: Queued start job for default target initrd.target.
Sep 9 05:35:09.065051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:09.065067 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:09.065084 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 05:35:09.065103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:09.065119 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 05:35:09.065140 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 05:35:09.065159 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 05:35:09.065178 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 05:35:09.065213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:09.065229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:09.065246 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:35:09.065262 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:09.065286 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:09.065302 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:35:09.065318 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:09.065334 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:09.065355 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 05:35:09.065371 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:35:09.065387 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:35:09.065410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:35:09.065427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:35:09.065443 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:35:09.065466 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:35:09.065482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:35:09.065501 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:35:09.065518 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:35:09.065586 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:35:09.065603 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:35:09.065619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:35:09.065635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:09.065649 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:35:09.065669 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:35:09.065685 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:35:09.065700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:35:09.065808 systemd-journald[212]: Collecting audit messages is disabled. 
Sep 9 05:35:09.065850 systemd-journald[212]: Journal started Sep 9 05:35:09.065883 systemd-journald[212]: Runtime Journal (/run/log/journal/ec1e1fd7207448b1bdefac00b423ccd0) is 4.9M, max 39.5M, 34.6M free. Sep 9 05:35:09.040276 systemd-modules-load[213]: Inserted module 'overlay' Sep 9 05:35:09.104104 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 05:35:09.104144 kernel: Bridge firewalling registered Sep 9 05:35:09.104161 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:35:09.087519 systemd-modules-load[213]: Inserted module 'br_netfilter' Sep 9 05:35:09.107542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:35:09.109072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:09.116451 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:35:09.121178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:35:09.126461 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:35:09.128914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:35:09.143961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:35:09.160620 systemd-tmpfiles[230]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:35:09.165368 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:35:09.172245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:35:09.178477 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:35:09.181256 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 9 05:35:09.182561 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:35:09.188388 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:35:09.233695 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:35:09.248912 systemd-resolved[247]: Positive Trust Anchors: Sep 9 05:35:09.248933 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:35:09.248970 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:35:09.254954 systemd-resolved[247]: Defaulting to hostname 'linux'. Sep 9 05:35:09.257564 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:35:09.258330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:35:09.388248 kernel: SCSI subsystem initialized Sep 9 05:35:09.404289 kernel: Loading iSCSI transport class v2.0-870. 
Sep 9 05:35:09.423320 kernel: iscsi: registered transport (tcp) Sep 9 05:35:09.454328 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:35:09.454418 kernel: QLogic iSCSI HBA Driver Sep 9 05:35:09.488359 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:35:09.528411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:35:09.533949 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:35:09.611687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:35:09.615377 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 05:35:09.686270 kernel: raid6: avx2x4 gen() 16634 MB/s Sep 9 05:35:09.704308 kernel: raid6: avx2x2 gen() 15714 MB/s Sep 9 05:35:09.722521 kernel: raid6: avx2x1 gen() 10502 MB/s Sep 9 05:35:09.722610 kernel: raid6: using algorithm avx2x4 gen() 16634 MB/s Sep 9 05:35:09.741740 kernel: raid6: .... xor() 6175 MB/s, rmw enabled Sep 9 05:35:09.741870 kernel: raid6: using avx2x2 recovery algorithm Sep 9 05:35:09.769237 kernel: xor: automatically using best checksumming function avx Sep 9 05:35:09.952269 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:35:09.963126 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:35:09.966503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:35:10.001486 systemd-udevd[460]: Using default interface naming scheme 'v255'. Sep 9 05:35:10.008731 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:35:10.014022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:35:10.048229 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Sep 9 05:35:10.086385 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 9 05:35:10.089243 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:35:10.160099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:35:10.164796 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:35:10.270230 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 9 05:35:10.285736 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Sep 9 05:35:10.298426 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 05:35:10.303220 kernel: libata version 3.00 loaded. Sep 9 05:35:10.305221 kernel: scsi host0: Virtio SCSI HBA Sep 9 05:35:10.323130 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:35:10.326762 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 9 05:35:10.340734 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 05:35:10.340807 kernel: GPT:9289727 != 125829119 Sep 9 05:35:10.340821 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:35:10.340832 kernel: GPT:9289727 != 125829119 Sep 9 05:35:10.340842 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 9 05:35:10.340854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:10.350214 kernel: scsi host1: ata_piix Sep 9 05:35:10.353210 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 05:35:10.361708 kernel: scsi host2: ata_piix Sep 9 05:35:10.362017 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Sep 9 05:35:10.362033 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Sep 9 05:35:10.362044 kernel: AES CTR mode by8 optimization enabled Sep 9 05:35:10.372632 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 9 05:35:10.379493 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 9 05:35:10.379439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:10.379565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:10.382554 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:10.391231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:10.395013 kernel: ACPI: bus type USB registered Sep 9 05:35:10.395049 kernel: usbcore: registered new interface driver usbfs Sep 9 05:35:10.397695 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:10.403708 kernel: usbcore: registered new interface driver hub Sep 9 05:35:10.407522 kernel: usbcore: registered new device driver usb Sep 9 05:35:10.479869 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:10.600140 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 9 05:35:10.618959 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 9 05:35:10.619351 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 9 05:35:10.619498 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 9 05:35:10.619627 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 9 05:35:10.618902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 05:35:10.625058 kernel: hub 1-0:1.0: USB hub found Sep 9 05:35:10.625359 kernel: hub 1-0:1.0: 2 ports detected Sep 9 05:35:10.634461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 05:35:10.635716 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 05:35:10.640824 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:35:10.651792 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:35:10.653452 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:35:10.654240 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:35:10.655519 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:35:10.658266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:35:10.661394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:35:10.688034 disk-uuid[616]: Primary Header is updated. Sep 9 05:35:10.688034 disk-uuid[616]: Secondary Entries is updated. Sep 9 05:35:10.688034 disk-uuid[616]: Secondary Header is updated. Sep 9 05:35:10.695439 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:10.697058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 9 05:35:10.707223 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:11.713421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:35:11.714774 disk-uuid[619]: The operation has completed successfully. Sep 9 05:35:11.791719 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:35:11.791898 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:35:11.843871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:35:11.870564 sh[635]: Success Sep 9 05:35:11.899967 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:35:11.900066 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:35:11.903228 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:35:11.917230 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Sep 9 05:35:11.977108 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:35:11.980126 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:35:11.995394 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:35:12.012860 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (647) Sep 9 05:35:12.012961 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:35:12.016460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:12.027181 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:35:12.027297 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:35:12.030830 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:35:12.032402 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 9 05:35:12.033752 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:35:12.036481 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:35:12.038586 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:35:12.070256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (678) Sep 9 05:35:12.074450 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:12.074533 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:12.082297 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:12.082420 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:12.090253 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:12.093749 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:35:12.097401 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:35:12.218721 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:35:12.222413 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 05:35:12.279009 systemd-networkd[816]: lo: Link UP Sep 9 05:35:12.279025 systemd-networkd[816]: lo: Gained carrier Sep 9 05:35:12.302091 systemd-networkd[816]: Enumeration completed Sep 9 05:35:12.308234 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:35:12.310304 systemd[1]: Reached target network.target - Network. Sep 9 05:35:12.327912 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. 
Sep 9 05:35:12.327931 systemd-networkd[816]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 9 05:35:12.331893 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:12.331898 systemd-networkd[816]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:35:12.332880 systemd-networkd[816]: eth0: Link UP Sep 9 05:35:12.335341 systemd-networkd[816]: eth1: Link UP Sep 9 05:35:12.335727 systemd-networkd[816]: eth0: Gained carrier Sep 9 05:35:12.335753 systemd-networkd[816]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 9 05:35:12.339221 systemd-networkd[816]: eth1: Gained carrier Sep 9 05:35:12.339244 systemd-networkd[816]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:35:12.350463 systemd-networkd[816]: eth0: DHCPv4 address 24.199.106.51/20, gateway 24.199.96.1 acquired from 169.254.169.253 Sep 9 05:35:12.359634 ignition[725]: Ignition 2.22.0 Sep 9 05:35:12.359654 ignition[725]: Stage: fetch-offline Sep 9 05:35:12.359692 ignition[725]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:12.362719 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:35:12.359701 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:12.359854 ignition[725]: parsed url from cmdline: "" Sep 9 05:35:12.366491 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 9 05:35:12.359860 ignition[725]: no config URL provided Sep 9 05:35:12.359870 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:35:12.359883 ignition[725]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:35:12.359893 ignition[725]: failed to fetch config: resource requires networking Sep 9 05:35:12.370763 systemd-networkd[816]: eth1: DHCPv4 address 10.124.0.26/20 acquired from 169.254.169.253 Sep 9 05:35:12.360111 ignition[725]: Ignition finished successfully Sep 9 05:35:12.431092 ignition[825]: Ignition 2.22.0 Sep 9 05:35:12.432457 ignition[825]: Stage: fetch Sep 9 05:35:12.432744 ignition[825]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:12.432760 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:12.432888 ignition[825]: parsed url from cmdline: "" Sep 9 05:35:12.432894 ignition[825]: no config URL provided Sep 9 05:35:12.432904 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:35:12.432917 ignition[825]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:35:12.432977 ignition[825]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 9 05:35:12.450143 ignition[825]: GET result: OK Sep 9 05:35:12.451333 ignition[825]: parsing config with SHA512: 415d098d8558376d81a8b69504a650e9ee00c0a85253efca785d3279fa7f7e633baf553dcfdc5d2505f50cbf3d92c08a8afe1250fc89b929b6bdedbcaff9bfd8 Sep 9 05:35:12.459601 unknown[825]: fetched base config from "system" Sep 9 05:35:12.460293 unknown[825]: fetched base config from "system" Sep 9 05:35:12.460733 ignition[825]: fetch: fetch complete Sep 9 05:35:12.460302 unknown[825]: fetched user config from "digitalocean" Sep 9 05:35:12.460740 ignition[825]: fetch: fetch passed Sep 9 05:35:12.460819 ignition[825]: Ignition finished successfully Sep 9 05:35:12.465282 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 05:35:12.468439 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 9 05:35:12.525106 ignition[832]: Ignition 2.22.0 Sep 9 05:35:12.525130 ignition[832]: Stage: kargs Sep 9 05:35:12.525554 ignition[832]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:12.525574 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:12.527557 ignition[832]: kargs: kargs passed Sep 9 05:35:12.527655 ignition[832]: Ignition finished successfully Sep 9 05:35:12.532580 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:35:12.535237 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 05:35:12.588990 ignition[838]: Ignition 2.22.0 Sep 9 05:35:12.589010 ignition[838]: Stage: disks Sep 9 05:35:12.589315 ignition[838]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:12.589333 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:12.590438 ignition[838]: disks: disks passed Sep 9 05:35:12.593269 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:35:12.590526 ignition[838]: Ignition finished successfully Sep 9 05:35:12.595507 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:35:12.596987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:35:12.597709 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:35:12.599027 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:35:12.600372 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:35:12.602666 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:35:12.635650 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:35:12.640259 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:35:12.646038 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 9 05:35:12.799260 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:35:12.799958 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:35:12.801105 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:35:12.803951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:12.806413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:35:12.810404 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Sep 9 05:35:12.818617 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 9 05:35:12.821313 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:35:12.822609 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:35:12.828142 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:35:12.831361 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:35:12.849266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854) Sep 9 05:35:12.855217 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:12.855298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:12.873242 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:12.873326 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:12.901565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:35:12.931986 initrd-setup-root[884]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:35:12.942377 coreos-metadata[856]: Sep 09 05:35:12.942 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:12.946418 coreos-metadata[857]: Sep 09 05:35:12.946 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 9 05:35:12.948858 initrd-setup-root[891]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:35:12.956932 initrd-setup-root[898]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:35:12.959304 coreos-metadata[857]: Sep 09 05:35:12.959 INFO Fetch successful Sep 9 05:35:12.961319 coreos-metadata[856]: Sep 09 05:35:12.961 INFO Fetch successful Sep 9 05:35:12.973478 initrd-setup-root[905]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:35:12.976798 coreos-metadata[857]: Sep 09 05:35:12.974 INFO wrote hostname ci-4452.0.0-n-41a4a07365 to /sysroot/etc/hostname Sep 9 05:35:12.978812 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 9 05:35:12.981858 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Sep 9 05:35:12.983015 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Sep 9 05:35:13.106422 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:35:13.109337 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:35:13.112462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:35:13.134799 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:35:13.136280 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:13.157430 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 05:35:13.176275 ignition[975]: INFO : Ignition 2.22.0 Sep 9 05:35:13.176275 ignition[975]: INFO : Stage: mount Sep 9 05:35:13.177633 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:13.177633 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:13.179979 ignition[975]: INFO : mount: mount passed Sep 9 05:35:13.179979 ignition[975]: INFO : Ignition finished successfully Sep 9 05:35:13.180757 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:35:13.183651 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:35:13.209550 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:35:13.244259 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (987) Sep 9 05:35:13.248136 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:35:13.248318 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:35:13.252329 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:35:13.252409 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:35:13.255558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:35:13.294321 ignition[1003]: INFO : Ignition 2.22.0 Sep 9 05:35:13.294321 ignition[1003]: INFO : Stage: files Sep 9 05:35:13.295851 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:35:13.295851 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 9 05:35:13.297629 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:35:13.297629 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:35:13.297629 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:35:13.300451 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:35:13.300451 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:35:13.302486 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:35:13.302317 unknown[1003]: wrote ssh authorized keys file for user: core Sep 9 05:35:13.306229 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:35:13.306229 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 9 05:35:13.424104 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:35:13.963395 systemd-networkd[816]: eth0: Gained IPv6LL Sep 9 05:35:14.110108 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 9 05:35:14.110108 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:14.113250 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:14.128004 ignition[1003]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 9 05:35:14.347485 systemd-networkd[816]: eth1: Gained IPv6LL Sep 9 05:35:14.755998 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 05:35:17.075145 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 9 05:35:17.075145 ignition[1003]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 05:35:17.078407 ignition[1003]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:17.082167 ignition[1003]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:35:17.082167 ignition[1003]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 05:35:17.082167 ignition[1003]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:17.086136 ignition[1003]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:35:17.086136 ignition[1003]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:17.086136 ignition[1003]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:35:17.086136 ignition[1003]: INFO : files: files passed Sep 9 05:35:17.086136 ignition[1003]: INFO : Ignition finished successfully Sep 9 05:35:17.088303 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:35:17.092766 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 9 05:35:17.096416 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 05:35:17.118975 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 05:35:17.120305 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 05:35:17.126131 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:17.126131 initrd-setup-root-after-ignition[1034]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:17.128328 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 05:35:17.128873 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:17.130395 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 05:35:17.132093 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 05:35:17.208701 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 05:35:17.208879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 05:35:17.210972 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:17.212275 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 05:35:17.213859 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 05:35:17.215412 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 05:35:17.248105 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:17.251964 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 05:35:17.274753 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:17.276483 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:17.278398 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 05:35:17.279700 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 05:35:17.279873 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 05:35:17.282428 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 05:35:17.283105 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 05:35:17.285469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 05:35:17.287248 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 05:35:17.289180 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 05:35:17.291247 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 05:35:17.292132 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 05:35:17.292987 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 05:35:17.294649 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 05:35:17.296026 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 05:35:17.297492 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 05:35:17.298965 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 05:35:17.299171 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 05:35:17.300617 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:17.301392 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:17.302609 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 05:35:17.302731 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:17.303890 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 05:35:17.304130 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 05:35:17.305836 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 05:35:17.306083 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 05:35:17.307379 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 05:35:17.307585 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 05:35:17.308471 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 9 05:35:17.308574 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 9 05:35:17.312325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 05:35:17.313087 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 05:35:17.313290 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:17.318534 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 05:35:17.319672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 05:35:17.319888 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:17.322559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 05:35:17.322689 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 05:35:17.337537 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 05:35:17.338371 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 05:35:17.353963 ignition[1058]: INFO : Ignition 2.22.0
Sep 9 05:35:17.355291 ignition[1058]: INFO : Stage: umount
Sep 9 05:35:17.355291 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 05:35:17.355291 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 9 05:35:17.358556 ignition[1058]: INFO : umount: umount passed
Sep 9 05:35:17.358556 ignition[1058]: INFO : Ignition finished successfully
Sep 9 05:35:17.361987 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 05:35:17.364173 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 05:35:17.364890 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 05:35:17.366382 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 05:35:17.367065 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 05:35:17.368274 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 05:35:17.368346 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 05:35:17.368862 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 05:35:17.368899 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 05:35:17.369503 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 9 05:35:17.369545 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 9 05:35:17.370977 systemd[1]: Stopped target network.target - Network.
Sep 9 05:35:17.371937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 05:35:17.372011 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 05:35:17.373104 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 05:35:17.374162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 05:35:17.379341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:17.380797 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 05:35:17.381494 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 05:35:17.382556 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 05:35:17.382617 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 05:35:17.383573 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 05:35:17.383623 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 05:35:17.384524 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 05:35:17.384594 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 05:35:17.385828 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 05:35:17.385918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 05:35:17.386811 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 05:35:17.386896 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 05:35:17.388238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 05:35:17.389493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:17.397602 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 05:35:17.397784 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:17.403838 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 05:35:17.404270 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 05:35:17.404409 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 05:35:17.407844 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 05:35:17.409221 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 05:35:17.410141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 05:35:17.410301 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:17.414352 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 05:35:17.415092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 05:35:17.415210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 05:35:17.417800 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:35:17.417895 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:17.422378 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 05:35:17.422492 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:17.424811 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 05:35:17.424921 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:17.427239 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:17.431961 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 05:35:17.432099 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:17.442941 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 05:35:17.449617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:17.451544 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 05:35:17.451693 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:17.453119 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 05:35:17.453177 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:17.454686 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 05:35:17.454771 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 05:35:17.457390 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 05:35:17.457513 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 05:35:17.458985 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 05:35:17.459079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 05:35:17.462057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 05:35:17.464561 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 05:35:17.464683 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:17.468749 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 05:35:17.468854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:17.471741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:35:17.471839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:35:17.475520 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 05:35:17.475641 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 05:35:17.475709 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 05:35:17.476340 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 05:35:17.478374 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 05:35:17.490011 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 05:35:17.490288 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 05:35:17.492333 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 05:35:17.501003 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 05:35:17.527637 systemd[1]: Switching root.
Sep 9 05:35:17.575058 systemd-journald[212]: Journal stopped
Sep 9 05:35:19.064324 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Sep 9 05:35:19.064437 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 05:35:19.064453 kernel: SELinux: policy capability open_perms=1
Sep 9 05:35:19.064470 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 05:35:19.064487 kernel: SELinux: policy capability always_check_network=0
Sep 9 05:35:19.064498 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 05:35:19.064515 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 05:35:19.064525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 05:35:19.064537 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 05:35:19.064555 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 05:35:19.064576 kernel: audit: type=1403 audit(1757396117.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 05:35:19.064599 systemd[1]: Successfully loaded SELinux policy in 73.204ms.
Sep 9 05:35:19.064633 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.366ms.
Sep 9 05:35:19.064647 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 05:35:19.064661 systemd[1]: Detected virtualization kvm.
Sep 9 05:35:19.064672 systemd[1]: Detected architecture x86-64.
Sep 9 05:35:19.064683 systemd[1]: Detected first boot.
Sep 9 05:35:19.064695 systemd[1]: Hostname set to .
Sep 9 05:35:19.064706 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 05:35:19.064719 zram_generator::config[1101]: No configuration found.
Sep 9 05:35:19.064737 kernel: Guest personality initialized and is inactive
Sep 9 05:35:19.064754 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 05:35:19.064765 kernel: Initialized host personality
Sep 9 05:35:19.065644 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 05:35:19.065676 systemd[1]: Populated /etc with preset unit settings.
Sep 9 05:35:19.065697 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 05:35:19.065716 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 05:35:19.065729 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 05:35:19.065759 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 05:35:19.065778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 05:35:19.065798 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 05:35:19.065821 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 05:35:19.065833 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 05:35:19.065845 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 05:35:19.065858 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 05:35:19.065882 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 05:35:19.065900 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 05:35:19.065922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 05:35:19.065942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 05:35:19.065960 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 05:35:19.065978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 05:35:19.065997 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 05:35:19.066014 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 05:35:19.066032 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 05:35:19.066053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 05:35:19.066065 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 05:35:19.066083 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 05:35:19.066094 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 05:35:19.066106 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 05:35:19.066117 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 05:35:19.066129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 05:35:19.066141 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 05:35:19.066161 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 05:35:19.066173 systemd[1]: Reached target swap.target - Swaps.
Sep 9 05:35:19.066210 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 05:35:19.066230 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 05:35:19.066249 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 05:35:19.066263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 05:35:19.066275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 05:35:19.066286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 05:35:19.066297 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 05:35:19.066311 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 05:35:19.066323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 05:35:19.066334 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 05:35:19.066346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:19.066357 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 05:35:19.067231 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 05:35:19.067263 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 05:35:19.067276 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 05:35:19.067288 systemd[1]: Reached target machines.target - Containers.
Sep 9 05:35:19.067305 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 05:35:19.067317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:19.067330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 05:35:19.067345 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 05:35:19.067359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:19.067390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:35:19.067409 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:19.067429 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 05:35:19.067445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:19.067458 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 05:35:19.067469 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 05:35:19.067482 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 05:35:19.067493 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 05:35:19.067505 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 05:35:19.067517 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:19.067529 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 05:35:19.067546 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 05:35:19.067559 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 05:35:19.067571 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 05:35:19.067583 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 05:35:19.067595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 05:35:19.067610 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 05:35:19.067621 systemd[1]: Stopped verity-setup.service.
Sep 9 05:35:19.067633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:19.067645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 05:35:19.067656 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 05:35:19.067675 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 05:35:19.067690 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 05:35:19.067701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 05:35:19.067713 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 05:35:19.067724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 05:35:19.067735 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 05:35:19.067746 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 05:35:19.067758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:19.067769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:19.067783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:19.067795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:19.067806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 05:35:19.067818 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 05:35:19.067875 systemd-journald[1170]: Collecting audit messages is disabled.
Sep 9 05:35:19.067910 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 05:35:19.067921 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 05:35:19.067935 systemd-journald[1170]: Journal started
Sep 9 05:35:19.067963 systemd-journald[1170]: Runtime Journal (/run/log/journal/ec1e1fd7207448b1bdefac00b423ccd0) is 4.9M, max 39.5M, 34.6M free.
Sep 9 05:35:19.071788 kernel: loop: module loaded
Sep 9 05:35:18.699940 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 05:35:18.727317 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 05:35:18.728050 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 05:35:19.075989 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 05:35:19.082430 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 05:35:19.086227 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 05:35:19.092975 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 05:35:19.111224 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 05:35:19.111347 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:19.125240 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 05:35:19.125347 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:35:19.136222 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:35:19.140374 kernel: fuse: init (API version 7.41)
Sep 9 05:35:19.140460 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:35:19.156646 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:35:19.156739 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 05:35:19.159788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:19.171576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:19.173931 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 05:35:19.175516 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:35:19.185338 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 05:35:19.187820 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 05:35:19.210898 kernel: ACPI: bus type drm_connector registered
Sep 9 05:35:19.212336 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:35:19.218484 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:35:19.226397 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 05:35:19.252639 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:35:19.258894 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:35:19.268522 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 05:35:19.260342 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:35:19.261087 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:35:19.262582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:35:19.264771 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:35:19.267701 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:35:19.309200 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:35:19.329683 systemd-journald[1170]: Time spent on flushing to /var/log/journal/ec1e1fd7207448b1bdefac00b423ccd0 is 53.512ms for 1016 entries.
Sep 9 05:35:19.329683 systemd-journald[1170]: System Journal (/var/log/journal/ec1e1fd7207448b1bdefac00b423ccd0) is 8M, max 195.6M, 187.6M free.
Sep 9 05:35:19.406409 systemd-journald[1170]: Received client request to flush runtime journal.
Sep 9 05:35:19.406500 kernel: loop1: detected capacity change from 0 to 8
Sep 9 05:35:19.406540 kernel: loop2: detected capacity change from 0 to 110984
Sep 9 05:35:19.328629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:35:19.366725 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:35:19.389108 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 05:35:19.400281 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:35:19.411634 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:35:19.427231 kernel: loop3: detected capacity change from 0 to 224512
Sep 9 05:35:19.486334 kernel: loop4: detected capacity change from 0 to 128016
Sep 9 05:35:19.505811 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:35:19.510984 kernel: loop5: detected capacity change from 0 to 8
Sep 9 05:35:19.512874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:35:19.514223 kernel: loop6: detected capacity change from 0 to 110984
Sep 9 05:35:19.532246 kernel: loop7: detected capacity change from 0 to 224512
Sep 9 05:35:19.578976 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 9 05:35:19.580110 (sd-merge)[1246]: Merged extensions into '/usr'.
Sep 9 05:35:19.604992 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:35:19.605020 systemd[1]: Reloading...
Sep 9 05:35:19.627613 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 9 05:35:19.627644 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 9 05:35:19.817283 zram_generator::config[1276]: No configuration found.
Sep 9 05:35:20.264975 ldconfig[1188]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:35:20.417503 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:35:20.418044 systemd[1]: Reloading finished in 812 ms.
Sep 9 05:35:20.431708 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:35:20.433002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:35:20.434744 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:35:20.449480 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:35:20.452472 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:35:20.489139 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:35:20.489164 systemd[1]: Reloading...
Sep 9 05:35:20.494720 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:35:20.494765 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:35:20.495026 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:35:20.495298 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:35:20.496426 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:35:20.496675 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Sep 9 05:35:20.496727 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Sep 9 05:35:20.502710 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:35:20.502756 systemd-tmpfiles[1321]: Skipping /boot
Sep 9 05:35:20.516978 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:35:20.516999 systemd-tmpfiles[1321]: Skipping /boot
Sep 9 05:35:20.575218 zram_generator::config[1345]: No configuration found.
Sep 9 05:35:20.838658 systemd[1]: Reloading finished in 348 ms.
Sep 9 05:35:20.850933 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:35:20.858301 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:35:20.867410 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:35:20.870466 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:35:20.873495 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:35:20.878970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:35:20.887824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:35:20.894569 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:35:20.903554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.905025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:20.906652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:35:20.910402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:35:20.916359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:35:20.917494 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:20.917660 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:20.917788 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.924376 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:35:20.928925 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.929227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:20.929447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:20.929549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:20.929699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.934304 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.934606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:35:20.947691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:35:20.949458 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:35:20.949599 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:35:20.949745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:35:20.955899 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:35:20.968561 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 05:35:20.989152 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:35:21.000098 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:35:21.001638 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:35:21.008707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:35:21.013028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:35:21.023883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:35:21.033427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:35:21.034476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:35:21.035618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:35:21.049776 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:35:21.058515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:35:21.059276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:35:21.062704 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:35:21.065413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:35:21.066359 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:35:21.067141 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:35:21.073582 systemd-udevd[1398]: Using default interface naming scheme 'v255'.
Sep 9 05:35:21.091391 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:35:21.114718 augenrules[1441]: No rules
Sep 9 05:35:21.118317 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:35:21.118636 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:35:21.140234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:35:21.145449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:35:21.218483 systemd-resolved[1397]: Positive Trust Anchors:
Sep 9 05:35:21.221652 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:35:21.221699 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:35:21.237775 systemd-resolved[1397]: Using system hostname 'ci-4452.0.0-n-41a4a07365'.
Sep 9 05:35:21.246801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:35:21.247476 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:35:21.255442 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 05:35:21.256471 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:35:21.257380 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:35:21.258348 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:35:21.259164 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 05:35:21.259721 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:35:21.260615 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:35:21.260657 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:35:21.261403 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:35:21.271346 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:35:21.272273 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:35:21.273267 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:35:21.274959 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:35:21.277584 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:35:21.281764 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:35:21.283909 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:35:21.284806 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:35:21.294855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:35:21.296893 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:35:21.301225 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:35:21.307523 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:35:21.309089 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:35:21.310354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:35:21.310417 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:35:21.313121 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 9 05:35:21.319538 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:35:21.320640 systemd-networkd[1447]: lo: Link UP
Sep 9 05:35:21.321003 systemd-networkd[1447]: lo: Gained carrier
Sep 9 05:35:21.321994 systemd-networkd[1447]: Enumeration completed
Sep 9 05:35:21.325673 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:35:21.330430 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:35:21.336437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:35:21.338346 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:35:21.349300 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 05:35:21.355484 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:35:21.371443 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:35:21.371544 oslogin_cache_refresh[1482]: Refreshing passwd entry cache
Sep 9 05:35:21.373369 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Refreshing passwd entry cache
Sep 9 05:35:21.376708 jq[1480]: false
Sep 9 05:35:21.377510 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:35:21.377692 oslogin_cache_refresh[1482]: Failure getting users, quitting
Sep 9 05:35:21.378700 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Failure getting users, quitting
Sep 9 05:35:21.378700 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:35:21.378700 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Refreshing group entry cache
Sep 9 05:35:21.378700 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Failure getting groups, quitting
Sep 9 05:35:21.378700 google_oslogin_nss_cache[1482]: oslogin_cache_refresh[1482]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:35:21.377717 oslogin_cache_refresh[1482]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:35:21.377776 oslogin_cache_refresh[1482]: Refreshing group entry cache
Sep 9 05:35:21.378303 oslogin_cache_refresh[1482]: Failure getting groups, quitting
Sep 9 05:35:21.378312 oslogin_cache_refresh[1482]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:35:21.382509 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:35:21.396375 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:35:21.399870 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 05:35:21.401567 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:35:21.409633 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:35:21.419480 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:35:21.422067 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:35:21.426555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:35:21.427904 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:35:21.429131 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:35:21.429720 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 05:35:21.429994 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 05:35:21.439071 systemd[1]: Reached target network.target - Network.
Sep 9 05:35:21.456312 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:35:21.462546 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:35:21.467303 jq[1496]: true
Sep 9 05:35:21.468364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:35:21.468597 extend-filesystems[1481]: Found /dev/vda6
Sep 9 05:35:21.477660 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:35:21.477910 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:35:21.502284 extend-filesystems[1481]: Found /dev/vda9
Sep 9 05:35:21.556810 extend-filesystems[1481]: Checking size of /dev/vda9
Sep 9 05:35:21.563226 update_engine[1495]: I20250909 05:35:21.560945 1495 main.cc:92] Flatcar Update Engine starting
Sep 9 05:35:21.571679 coreos-metadata[1477]: Sep 09 05:35:21.568 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 9 05:35:21.579907 jq[1511]: true
Sep 9 05:35:21.580367 coreos-metadata[1477]: Sep 09 05:35:21.580 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Sep 9 05:35:21.584978 dbus-daemon[1478]: [system] SELinux support is enabled
Sep 9 05:35:21.585443 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:35:21.598546 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:35:21.598596 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:35:21.602924 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:35:21.602955 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:35:21.606945 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:35:21.609104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:35:21.631552 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:35:21.635369 tar[1508]: linux-amd64/LICENSE
Sep 9 05:35:21.635369 tar[1508]: linux-amd64/helm
Sep 9 05:35:21.638536 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:35:21.640943 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:35:21.643101 extend-filesystems[1481]: Resized partition /dev/vda9
Sep 9 05:35:21.666238 extend-filesystems[1545]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:35:21.647663 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:35:21.648437 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:35:21.671969 update_engine[1495]: I20250909 05:35:21.671463 1495 update_check_scheduler.cc:74] Next update check in 11m21s
Sep 9 05:35:21.683794 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 9 05:35:21.723758 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Sep 9 05:35:21.731492 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 9 05:35:21.732274 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:35:21.760385 bash[1546]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:35:21.763710 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:35:21.773773 systemd[1]: Starting sshkeys.service...
Sep 9 05:35:21.815854 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 9 05:35:21.869854 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 05:35:21.869854 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 9 05:35:21.869854 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 9 05:35:21.885391 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 9 05:35:21.852653 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:35:21.885509 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Sep 9 05:35:21.852907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:35:21.888450 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 9 05:35:21.894102 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 9 05:35:21.938031 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 9 05:35:21.944977 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 9 05:35:21.952879 systemd-networkd[1447]: eth0: Configuring with /run/systemd/network/10-12:5d:a5:d1:d5:d1.network.
Sep 9 05:35:21.997701 systemd-networkd[1447]: eth0: Link UP
Sep 9 05:35:21.997918 systemd-networkd[1447]: eth0: Gained carrier
Sep 9 05:35:22.008781 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 9 05:35:22.044395 systemd-networkd[1447]: eth1: Configuring with /run/systemd/network/10-52:87:ca:d7:9e:da.network.
Sep 9 05:35:22.050617 systemd-networkd[1447]: eth1: Link UP
Sep 9 05:35:22.050619 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 9 05:35:22.050823 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 9 05:35:22.051362 systemd-networkd[1447]: eth1: Gained carrier
Sep 9 05:35:22.057529 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 9 05:35:22.058730 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection.
Sep 9 05:35:22.069756 coreos-metadata[1555]: Sep 09 05:35:22.069 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 9 05:35:22.084877 coreos-metadata[1555]: Sep 09 05:35:22.084 INFO Fetch successful
Sep 9 05:35:22.098957 unknown[1555]: wrote ssh authorized keys file for user: core
Sep 9 05:35:22.138299 update-ssh-keys[1580]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:35:22.140450 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 9 05:35:22.143093 systemd-logind[1492]: New seat seat0.
Sep 9 05:35:22.143649 systemd[1]: Finished sshkeys.service.
Sep 9 05:35:22.144752 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:35:22.183882 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:35:22.188844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:35:22.198789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:35:22.263301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:35:22.276932 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 05:35:22.368594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 05:35:22.377236 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 9 05:35:22.383399 containerd[1536]: time="2025-09-09T05:35:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:35:22.385810 containerd[1536]: time="2025-09-09T05:35:22.385756788Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:35:22.392223 kernel: ACPI: button: Power Button [PWRF]
Sep 9 05:35:22.396286 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 05:35:22.425226 containerd[1536]: time="2025-09-09T05:35:22.424756678Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.143µs"
Sep 9 05:35:22.425226 containerd[1536]: time="2025-09-09T05:35:22.424834634Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:35:22.425226 containerd[1536]: time="2025-09-09T05:35:22.424906707Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:35:22.426361 containerd[1536]: time="2025-09-09T05:35:22.426239563Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:35:22.427215 containerd[1536]: time="2025-09-09T05:35:22.426841758Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:35:22.427215 containerd[1536]: time="2025-09-09T05:35:22.426912134Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:35:22.427215 containerd[1536]: time="2025-09-09T05:35:22.427093744Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:35:22.427215 containerd[1536]: time="2025-09-09T05:35:22.427114648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428107848Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428174978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428225076Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428240940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428432337Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428746995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428797640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428838927Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:35:22.429286 containerd[1536]: time="2025-09-09T05:35:22.428936810Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:35:22.431599 containerd[1536]: time="2025-09-09T05:35:22.431553627Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:35:22.432089 containerd[1536]: time="2025-09-09T05:35:22.432056196Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:35:22.437425 containerd[1536]: time="2025-09-09T05:35:22.437370955Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438106314Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438142039Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438156639Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438169475Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438181496Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438215732Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438228861Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438239933Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438250171Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438260067Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438273494Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438417654Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438442843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 05:35:22.440220 containerd[1536]: time="2025-09-09T05:35:22.438458570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438470254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438482655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438493774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438518685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438530208Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438541685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438551690Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438561507Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438633232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438648055Z" level=info msg="Start snapshots syncer"
Sep 9 05:35:22.440590 containerd[1536]: time="2025-09-09T05:35:22.438676397Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 05:35:22.440803 containerd[1536]: time="2025-09-09T05:35:22.438961520Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 05:35:22.440803 containerd[1536]: time="2025-09-09T05:35:22.439181865Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439557068Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439688608Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439716647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439733839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439748043Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439766317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439781614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439797797Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439846014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439865016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439878555Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439923141Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439945896Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 9 05:35:22.441004 containerd[1536]: time="2025-09-09T05:35:22.439956239Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.439969832Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.439981108Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.439994317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440006366Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440027025Z" level=info msg="runtime interface created"
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440033265Z" level=info msg="created NRI interface"
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440046881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440061407Z" level=info msg="Connect containerd service"
Sep 9 05:35:22.441462 containerd[1536]: time="2025-09-09T05:35:22.440098517Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 05:35:22.448228 containerd[1536]:
time="2025-09-09T05:35:22.446522614Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:35:22.595986 coreos-metadata[1477]: Sep 09 05:35:22.595 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Sep 9 05:35:22.619122 coreos-metadata[1477]: Sep 09 05:35:22.618 INFO Fetch successful Sep 9 05:35:22.697796 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:35:22.724922 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 05:35:22.727108 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 05:35:22.785566 containerd[1536]: time="2025-09-09T05:35:22.785506968Z" level=info msg="Start subscribing containerd event" Sep 9 05:35:22.786325 containerd[1536]: time="2025-09-09T05:35:22.786277751Z" level=info msg="Start recovering state" Sep 9 05:35:22.786626 containerd[1536]: time="2025-09-09T05:35:22.786598807Z" level=info msg="Start event monitor" Sep 9 05:35:22.786626 containerd[1536]: time="2025-09-09T05:35:22.786623397Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:35:22.786696 containerd[1536]: time="2025-09-09T05:35:22.786635159Z" level=info msg="Start streaming server" Sep 9 05:35:22.786696 containerd[1536]: time="2025-09-09T05:35:22.786655983Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:35:22.786696 containerd[1536]: time="2025-09-09T05:35:22.786666071Z" level=info msg="runtime interface starting up..." Sep 9 05:35:22.786696 containerd[1536]: time="2025-09-09T05:35:22.786673647Z" level=info msg="starting plugins..." 
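The "failed to load cni during init" error above is benign at this stage: containerd's CRI plugin found no network config under /etc/cni/net.d (the confDir from the config dump earlier) and defers pod networking until one appears; the "cni network conf syncer" started further down picks it up automatically. For reference only, a minimal conflist that would satisfy the loader might look like the sketch below. The network name, bridge name, and subnet are illustrative assumptions, not values from this host:

```
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    }
  ]
}
```

Dropped into /etc/cni/net.d (e.g. as 10-example.conflist), a file of this shape would clear the "no network config found" condition on the next syncer pass; in a kubeadm-style bootstrap, a CNI plugin installed after cluster join normally writes this file instead.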
Sep 9 05:35:22.786779 containerd[1536]: time="2025-09-09T05:35:22.786697334Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:35:22.788795 containerd[1536]: time="2025-09-09T05:35:22.788573685Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:35:22.788795 containerd[1536]: time="2025-09-09T05:35:22.788691006Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:35:22.789108 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:35:22.790664 containerd[1536]: time="2025-09-09T05:35:22.788950413Z" level=info msg="containerd successfully booted in 0.406018s" Sep 9 05:35:22.935308 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 05:35:22.972276 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 05:35:22.976819 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 05:35:22.981058 systemd[1]: Started sshd@0-24.199.106.51:22-139.178.89.65:37176.service - OpenSSH per-connection server daemon (139.178.89.65:37176). Sep 9 05:35:23.024314 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 05:35:23.024737 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 05:35:23.031346 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 05:35:23.054402 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 9 05:35:23.054514 systemd-networkd[1447]: eth0: Gained IPv6LL Sep 9 05:35:23.056137 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Sep 9 05:35:23.061093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 05:35:23.063884 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 05:35:23.071560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 05:35:23.078372 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 9 05:35:23.077458 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 05:35:23.098102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 05:35:23.104072 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 05:35:23.172917 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 05:35:23.173225 kernel: Console: switching to colour dummy device 80x25 Sep 9 05:35:23.174665 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 05:35:23.199900 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 05:35:23.204232 tar[1508]: linux-amd64/README.md Sep 9 05:35:23.216874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:23.222515 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 9 05:35:23.222616 kernel: [drm] features: -context_init Sep 9 05:35:23.222691 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 37176 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:23.228637 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:23.244937 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:35:23.252534 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:35:23.253384 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 05:35:23.279810 systemd-logind[1492]: New session 1 of user core. 
Sep 9 05:35:23.287544 kernel: [drm] number of scanouts: 1 Sep 9 05:35:23.287726 kernel: [drm] number of cap sets: 0 Sep 9 05:35:23.290219 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Sep 9 05:35:23.295678 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 9 05:35:23.299251 kernel: Console: switching to colour frame buffer device 128x48 Sep 9 05:35:23.304248 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 9 05:35:23.321996 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:35:23.332694 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:35:23.353481 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:35:23.363149 systemd-logind[1492]: New session c1 of user core. Sep 9 05:35:23.469495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:23.471373 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 05:35:23.489980 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 05:35:23.543563 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:23.543758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:23.545264 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:23.548489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:35:23.553675 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:35:23.582022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:35:23.583361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:23.589394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 05:35:23.633839 systemd[1660]: Queued start job for default target default.target. Sep 9 05:35:23.639493 kernel: EDAC MC: Ver: 3.0.0 Sep 9 05:35:23.645967 systemd[1660]: Created slice app.slice - User Application Slice. Sep 9 05:35:23.646014 systemd[1660]: Reached target paths.target - Paths. Sep 9 05:35:23.646062 systemd[1660]: Reached target timers.target - Timers. Sep 9 05:35:23.649446 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:35:23.679510 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:35:23.689082 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:35:23.689251 systemd[1660]: Reached target sockets.target - Sockets. Sep 9 05:35:23.689678 systemd[1660]: Reached target basic.target - Basic System. Sep 9 05:35:23.689741 systemd[1660]: Reached target default.target - Main User Target. Sep 9 05:35:23.689773 systemd[1660]: Startup finished in 298ms. Sep 9 05:35:23.689901 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:35:23.695512 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:35:23.772450 systemd[1]: Started sshd@1-24.199.106.51:22-139.178.89.65:37190.service - OpenSSH per-connection server daemon (139.178.89.65:37190). Sep 9 05:35:23.867322 sshd[1685]: Accepted publickey for core from 139.178.89.65 port 37190 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:23.869081 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:23.880478 systemd-logind[1492]: New session 2 of user core. Sep 9 05:35:23.883504 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 9 05:35:23.948662 sshd[1688]: Connection closed by 139.178.89.65 port 37190 Sep 9 05:35:23.950412 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:23.961054 systemd[1]: sshd@1-24.199.106.51:22-139.178.89.65:37190.service: Deactivated successfully. Sep 9 05:35:23.963838 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:35:23.965594 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit. Sep 9 05:35:23.970606 systemd-logind[1492]: Removed session 2. Sep 9 05:35:23.972310 systemd[1]: Started sshd@2-24.199.106.51:22-139.178.89.65:37192.service - OpenSSH per-connection server daemon (139.178.89.65:37192). Sep 9 05:35:24.055744 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 37192 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:24.057988 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:24.064515 systemd-logind[1492]: New session 3 of user core. Sep 9 05:35:24.069430 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:35:24.075445 systemd-networkd[1447]: eth1: Gained IPv6LL Sep 9 05:35:24.076244 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Sep 9 05:35:24.137794 sshd[1697]: Connection closed by 139.178.89.65 port 37192 Sep 9 05:35:24.143436 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:24.148700 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:35:24.149371 systemd[1]: sshd@2-24.199.106.51:22-139.178.89.65:37192.service: Deactivated successfully. Sep 9 05:35:24.152635 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:35:24.156479 systemd-logind[1492]: Removed session 3. Sep 9 05:35:24.675702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:24.676502 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 9 05:35:24.678000 systemd[1]: Startup finished in 4.227s (kernel) + 9.159s (initrd) + 6.864s (userspace) = 20.251s. Sep 9 05:35:24.688704 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:35:25.383321 kubelet[1707]: E0909 05:35:25.383229 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:35:25.385578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:35:25.385761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:35:25.386144 systemd[1]: kubelet.service: Consumed 1.507s CPU time, 262.6M memory peak. Sep 9 05:35:34.151729 systemd[1]: Started sshd@3-24.199.106.51:22-139.178.89.65:38854.service - OpenSSH per-connection server daemon (139.178.89.65:38854). Sep 9 05:35:34.221982 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 38854 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:34.223692 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.232290 systemd-logind[1492]: New session 4 of user core. Sep 9 05:35:34.238435 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:35:34.302328 sshd[1722]: Connection closed by 139.178.89.65 port 38854 Sep 9 05:35:34.301550 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.312688 systemd[1]: sshd@3-24.199.106.51:22-139.178.89.65:38854.service: Deactivated successfully. Sep 9 05:35:34.315669 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:35:34.317158 systemd-logind[1492]: Session 4 logged out. 
Waiting for processes to exit. Sep 9 05:35:34.321467 systemd[1]: Started sshd@4-24.199.106.51:22-139.178.89.65:38870.service - OpenSSH per-connection server daemon (139.178.89.65:38870). Sep 9 05:35:34.324383 systemd-logind[1492]: Removed session 4. Sep 9 05:35:34.388421 sshd[1728]: Accepted publickey for core from 139.178.89.65 port 38870 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:34.390533 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.398285 systemd-logind[1492]: New session 5 of user core. Sep 9 05:35:34.405536 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:35:34.465302 sshd[1731]: Connection closed by 139.178.89.65 port 38870 Sep 9 05:35:34.465936 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.477525 systemd[1]: sshd@4-24.199.106.51:22-139.178.89.65:38870.service: Deactivated successfully. Sep 9 05:35:34.480020 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:35:34.480946 systemd-logind[1492]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:35:34.485939 systemd[1]: Started sshd@5-24.199.106.51:22-139.178.89.65:38876.service - OpenSSH per-connection server daemon (139.178.89.65:38876). Sep 9 05:35:34.487678 systemd-logind[1492]: Removed session 5. Sep 9 05:35:34.560241 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 38876 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:34.562364 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.568744 systemd-logind[1492]: New session 6 of user core. Sep 9 05:35:34.580465 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 9 05:35:34.643042 sshd[1740]: Connection closed by 139.178.89.65 port 38876 Sep 9 05:35:34.643833 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.662932 systemd[1]: sshd@5-24.199.106.51:22-139.178.89.65:38876.service: Deactivated successfully. Sep 9 05:35:34.666262 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:35:34.667771 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit. Sep 9 05:35:34.672113 systemd[1]: Started sshd@6-24.199.106.51:22-139.178.89.65:38878.service - OpenSSH per-connection server daemon (139.178.89.65:38878). Sep 9 05:35:34.673951 systemd-logind[1492]: Removed session 6. Sep 9 05:35:34.743540 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 38878 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:34.745057 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.750951 systemd-logind[1492]: New session 7 of user core. Sep 9 05:35:34.766477 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:35:34.839432 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:35:34.839776 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:34.857806 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:34.861318 sshd[1749]: Connection closed by 139.178.89.65 port 38878 Sep 9 05:35:34.862029 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:34.876557 systemd[1]: sshd@6-24.199.106.51:22-139.178.89.65:38878.service: Deactivated successfully. Sep 9 05:35:34.878851 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:35:34.879963 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit. 
Sep 9 05:35:34.884434 systemd[1]: Started sshd@7-24.199.106.51:22-139.178.89.65:38886.service - OpenSSH per-connection server daemon (139.178.89.65:38886). Sep 9 05:35:34.886945 systemd-logind[1492]: Removed session 7. Sep 9 05:35:34.953518 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 38886 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:34.955438 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:34.961847 systemd-logind[1492]: New session 8 of user core. Sep 9 05:35:34.969509 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:35:35.034139 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:35:35.035257 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:35.042468 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:35.050153 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:35:35.050620 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:35.062412 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:35:35.112883 augenrules[1783]: No rules Sep 9 05:35:35.114649 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:35:35.115046 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:35:35.117740 sudo[1760]: pam_unix(sudo:session): session closed for user root Sep 9 05:35:35.121372 sshd[1759]: Connection closed by 139.178.89.65 port 38886 Sep 9 05:35:35.121997 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Sep 9 05:35:35.132757 systemd[1]: sshd@7-24.199.106.51:22-139.178.89.65:38886.service: Deactivated successfully. Sep 9 05:35:35.135352 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 9 05:35:35.136968 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:35:35.140577 systemd[1]: Started sshd@8-24.199.106.51:22-139.178.89.65:38888.service - OpenSSH per-connection server daemon (139.178.89.65:38888). Sep 9 05:35:35.141865 systemd-logind[1492]: Removed session 8. Sep 9 05:35:35.207336 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 38888 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk Sep 9 05:35:35.208403 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:35:35.214926 systemd-logind[1492]: New session 9 of user core. Sep 9 05:35:35.225547 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:35:35.285572 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:35:35.285934 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:35:35.636990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:35:35.645863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:35.831579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:35.842676 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:35:35.907779 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 9 05:35:35.924076 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:35:35.939271 kubelet[1823]: E0909 05:35:35.939177 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:35:35.944508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:35:35.944879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:35:35.945698 systemd[1]: kubelet.service: Consumed 218ms CPU time, 108.5M memory peak. Sep 9 05:35:36.350075 dockerd[1829]: time="2025-09-09T05:35:36.349928133Z" level=info msg="Starting up" Sep 9 05:35:36.352218 dockerd[1829]: time="2025-09-09T05:35:36.351641966Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:35:36.380948 dockerd[1829]: time="2025-09-09T05:35:36.380854013Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:35:36.441563 dockerd[1829]: time="2025-09-09T05:35:36.441501982Z" level=info msg="Loading containers: start." Sep 9 05:35:36.455247 kernel: Initializing XFRM netlink socket Sep 9 05:35:36.770916 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Sep 9 05:35:37.348845 systemd-timesyncd[1412]: Contacted time server 23.142.248.9:123 (2.flatcar.pool.ntp.org). Sep 9 05:35:37.348911 systemd-timesyncd[1412]: Initial clock synchronization to Tue 2025-09-09 05:35:37.348446 UTC. Sep 9 05:35:37.348949 systemd-resolved[1397]: Clock change detected. Flushing caches. 
Sep 9 05:35:37.363483 systemd-networkd[1447]: docker0: Link UP Sep 9 05:35:37.368136 dockerd[1829]: time="2025-09-09T05:35:37.368038428Z" level=info msg="Loading containers: done." Sep 9 05:35:37.385958 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1748454833-merged.mount: Deactivated successfully. Sep 9 05:35:37.387487 dockerd[1829]: time="2025-09-09T05:35:37.386468981Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:35:37.387487 dockerd[1829]: time="2025-09-09T05:35:37.386621843Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:35:37.387487 dockerd[1829]: time="2025-09-09T05:35:37.386762370Z" level=info msg="Initializing buildkit" Sep 9 05:35:37.418665 dockerd[1829]: time="2025-09-09T05:35:37.418611105Z" level=info msg="Completed buildkit initialization" Sep 9 05:35:37.431432 dockerd[1829]: time="2025-09-09T05:35:37.431369728Z" level=info msg="Daemon has completed initialization" Sep 9 05:35:37.431762 dockerd[1829]: time="2025-09-09T05:35:37.431700717Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:35:37.432538 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:35:38.337491 containerd[1536]: time="2025-09-09T05:35:38.337396675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 05:35:38.999987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223379253.mount: Deactivated successfully. 
Sep 9 05:35:40.499224 containerd[1536]: time="2025-09-09T05:35:40.499146101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:40.500323 containerd[1536]: time="2025-09-09T05:35:40.500272387Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 9 05:35:40.501597 containerd[1536]: time="2025-09-09T05:35:40.500938147Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:40.507588 containerd[1536]: time="2025-09-09T05:35:40.507099086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:40.511002 containerd[1536]: time="2025-09-09T05:35:40.510935135Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.173446187s" Sep 9 05:35:40.511002 containerd[1536]: time="2025-09-09T05:35:40.511000158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 9 05:35:40.512271 containerd[1536]: time="2025-09-09T05:35:40.512215663Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 9 05:35:42.161588 containerd[1536]: time="2025-09-09T05:35:42.161056576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:42.163238 containerd[1536]: time="2025-09-09T05:35:42.163197020Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 9 05:35:42.164579 containerd[1536]: time="2025-09-09T05:35:42.164230255Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:42.168179 containerd[1536]: time="2025-09-09T05:35:42.168123500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:42.169172 containerd[1536]: time="2025-09-09T05:35:42.169127540Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.656870442s" Sep 9 05:35:42.169320 containerd[1536]: time="2025-09-09T05:35:42.169299779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 9 05:35:42.170132 containerd[1536]: time="2025-09-09T05:35:42.169918991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 9 05:35:43.426978 containerd[1536]: time="2025-09-09T05:35:43.426896975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.428346 containerd[1536]: time="2025-09-09T05:35:43.428302684Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 9 05:35:43.429677 containerd[1536]: time="2025-09-09T05:35:43.429209230Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.432284 containerd[1536]: time="2025-09-09T05:35:43.432233614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:43.433269 containerd[1536]: time="2025-09-09T05:35:43.433229359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.263277174s" Sep 9 05:35:43.433269 containerd[1536]: time="2025-09-09T05:35:43.433270433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 9 05:35:43.433904 containerd[1536]: time="2025-09-09T05:35:43.433855943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 05:35:43.532187 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 9 05:35:44.686555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917323451.mount: Deactivated successfully. 
Sep 9 05:35:45.305675 containerd[1536]: time="2025-09-09T05:35:45.305615671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:45.307446 containerd[1536]: time="2025-09-09T05:35:45.307404954Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 9 05:35:45.308105 containerd[1536]: time="2025-09-09T05:35:45.308061420Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:45.310534 containerd[1536]: time="2025-09-09T05:35:45.310466150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:45.312421 containerd[1536]: time="2025-09-09T05:35:45.312073634Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.878184879s" Sep 9 05:35:45.312421 containerd[1536]: time="2025-09-09T05:35:45.312112911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 9 05:35:45.312779 containerd[1536]: time="2025-09-09T05:35:45.312748942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:35:45.838209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051870089.mount: Deactivated successfully. Sep 9 05:35:46.494481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 9 05:35:46.498578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:46.619762 systemd-resolved[1397]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 9 05:35:46.730309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:46.744174 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:35:46.828429 kubelet[2178]: E0909 05:35:46.828265 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:35:46.831462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:35:46.832381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:35:46.834160 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. 
Sep 9 05:35:47.157195 containerd[1536]: time="2025-09-09T05:35:47.157003619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.158864 containerd[1536]: time="2025-09-09T05:35:47.158735397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 05:35:47.160052 containerd[1536]: time="2025-09-09T05:35:47.159797016Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.164138 containerd[1536]: time="2025-09-09T05:35:47.164076836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:47.165985 containerd[1536]: time="2025-09-09T05:35:47.165650819Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.852859524s" Sep 9 05:35:47.165985 containerd[1536]: time="2025-09-09T05:35:47.165703181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:35:47.166258 containerd[1536]: time="2025-09-09T05:35:47.166213978Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:35:47.569938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497406613.mount: Deactivated successfully. 
Sep 9 05:35:47.575590 containerd[1536]: time="2025-09-09T05:35:47.575234329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:47.577425 containerd[1536]: time="2025-09-09T05:35:47.577378834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:35:47.578669 containerd[1536]: time="2025-09-09T05:35:47.577810128Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:47.581779 containerd[1536]: time="2025-09-09T05:35:47.581720846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:35:47.583494 containerd[1536]: time="2025-09-09T05:35:47.583336987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 417.093098ms" Sep 9 05:35:47.583494 containerd[1536]: time="2025-09-09T05:35:47.583380993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:35:47.583917 containerd[1536]: time="2025-09-09T05:35:47.583891252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 9 05:35:48.087446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4278188130.mount: Deactivated 
successfully. Sep 9 05:35:50.271562 containerd[1536]: time="2025-09-09T05:35:50.271470117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:50.272741 containerd[1536]: time="2025-09-09T05:35:50.272693101Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 9 05:35:50.273579 containerd[1536]: time="2025-09-09T05:35:50.273525310Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:50.276356 containerd[1536]: time="2025-09-09T05:35:50.276320815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:35:50.277691 containerd[1536]: time="2025-09-09T05:35:50.277650905Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.693722773s" Sep 9 05:35:50.277691 containerd[1536]: time="2025-09-09T05:35:50.277693841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 9 05:35:53.692080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:53.692251 systemd[1]: kubelet.service: Consumed 251ms CPU time, 110.6M memory peak. Sep 9 05:35:53.694885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:53.730830 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-9.scope)... 
Sep 9 05:35:53.730846 systemd[1]: Reloading... Sep 9 05:35:53.873417 zram_generator::config[2308]: No configuration found. Sep 9 05:35:54.218646 systemd[1]: Reloading finished in 487 ms. Sep 9 05:35:54.278065 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:35:54.278153 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:35:54.278795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:54.278873 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98M memory peak. Sep 9 05:35:54.282032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:35:54.463642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:35:54.474978 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:35:54.532306 kubelet[2367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:35:54.532306 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:35:54.532306 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:35:54.532924 kubelet[2367]: I0909 05:35:54.532409 2367 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:35:54.747533 kubelet[2367]: I0909 05:35:54.747482 2367 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:35:54.747779 kubelet[2367]: I0909 05:35:54.747764 2367 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:35:54.748217 kubelet[2367]: I0909 05:35:54.748192 2367 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:35:54.786278 kubelet[2367]: E0909 05:35:54.786194 2367 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://24.199.106.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.786623 kubelet[2367]: I0909 05:35:54.786595 2367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:35:54.799965 kubelet[2367]: I0909 05:35:54.799912 2367 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:35:54.804580 kubelet[2367]: I0909 05:35:54.804051 2367 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:35:54.804580 kubelet[2367]: I0909 05:35:54.804334 2367 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:35:54.804868 kubelet[2367]: I0909 05:35:54.804403 2367 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-41a4a07365","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:35:54.805058 kubelet[2367]: I0909 05:35:54.805043 2367 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 9 05:35:54.805107 kubelet[2367]: I0909 05:35:54.805101 2367 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:35:54.805405 kubelet[2367]: I0909 05:35:54.805379 2367 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:54.809828 kubelet[2367]: I0909 05:35:54.809784 2367 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:35:54.810070 kubelet[2367]: I0909 05:35:54.810052 2367 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:35:54.810198 kubelet[2367]: I0909 05:35:54.810184 2367 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:35:54.810269 kubelet[2367]: I0909 05:35:54.810259 2367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:35:54.815911 kubelet[2367]: W0909 05:35:54.814525 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.106.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-41a4a07365&limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:54.815911 kubelet[2367]: E0909 05:35:54.814687 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.106.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-41a4a07365&limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.815911 kubelet[2367]: W0909 05:35:54.815531 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.106.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:54.815911 kubelet[2367]: E0909 05:35:54.815621 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.106.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.817294 kubelet[2367]: I0909 05:35:54.817232 2367 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:35:54.821100 kubelet[2367]: I0909 05:35:54.821010 2367 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:35:54.821806 kubelet[2367]: W0909 05:35:54.821773 2367 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:35:54.825366 kubelet[2367]: I0909 05:35:54.824995 2367 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:35:54.825366 kubelet[2367]: I0909 05:35:54.825055 2367 server.go:1287] "Started kubelet" Sep 9 05:35:54.841350 kubelet[2367]: I0909 05:35:54.838709 2367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:35:54.842712 kubelet[2367]: I0909 05:35:54.842052 2367 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:35:54.843775 kubelet[2367]: I0909 05:35:54.843729 2367 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:35:54.845581 kubelet[2367]: E0909 05:35:54.842470 2367 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.106.51:6443/api/v1/namespaces/default/events\": dial tcp 24.199.106.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452.0.0-n-41a4a07365.1863867de1fbbff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452.0.0-n-41a4a07365,UID:ci-4452.0.0-n-41a4a07365,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452.0.0-n-41a4a07365,},FirstTimestamp:2025-09-09 05:35:54.825031673 +0000 UTC m=+0.345223688,LastTimestamp:2025-09-09 05:35:54.825031673 +0000 UTC m=+0.345223688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452.0.0-n-41a4a07365,}" Sep 9 05:35:54.848063 kubelet[2367]: I0909 05:35:54.847992 2367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:35:54.848809 kubelet[2367]: I0909 05:35:54.848516 2367 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:35:54.850058 kubelet[2367]: I0909 05:35:54.849790 2367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:35:54.850695 kubelet[2367]: I0909 05:35:54.850626 2367 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 05:35:54.850915 kubelet[2367]: E0909 05:35:54.850878 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-41a4a07365\" not found" Sep 9 05:35:54.853523 kubelet[2367]: E0909 05:35:54.852699 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-41a4a07365?timeout=10s\": dial tcp 24.199.106.51:6443: connect: connection refused" interval="200ms" Sep 9 05:35:54.854586 kubelet[2367]: I0909 05:35:54.853744 2367 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:35:54.854586 kubelet[2367]: I0909 05:35:54.853816 2367 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 05:35:54.854586 kubelet[2367]: W0909 05:35:54.854384 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://24.199.106.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:54.854586 kubelet[2367]: E0909 05:35:54.854451 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.106.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.855758 kubelet[2367]: I0909 05:35:54.855507 2367 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:35:54.856189 kubelet[2367]: E0909 05:35:54.855906 2367 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:35:54.856649 kubelet[2367]: I0909 05:35:54.856239 2367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:35:54.858982 kubelet[2367]: I0909 05:35:54.858963 2367 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:35:54.877580 kubelet[2367]: I0909 05:35:54.877339 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:35:54.878834 kubelet[2367]: I0909 05:35:54.878806 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 05:35:54.878834 kubelet[2367]: I0909 05:35:54.878837 2367 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 05:35:54.878963 kubelet[2367]: I0909 05:35:54.878867 2367 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 05:35:54.878963 kubelet[2367]: I0909 05:35:54.878874 2367 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 05:35:54.878963 kubelet[2367]: E0909 05:35:54.878928 2367 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:35:54.888009 kubelet[2367]: W0909 05:35:54.887943 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.106.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:54.888250 kubelet[2367]: E0909 05:35:54.888208 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.106.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:54.896176 kubelet[2367]: I0909 05:35:54.896136 2367 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 05:35:54.896176 kubelet[2367]: I0909 05:35:54.896162 2367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 05:35:54.896390 kubelet[2367]: I0909 05:35:54.896199 2367 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:35:54.899734 kubelet[2367]: I0909 05:35:54.899688 2367 policy_none.go:49] "None policy: Start" Sep 9 05:35:54.899734 kubelet[2367]: I0909 05:35:54.899725 2367 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 05:35:54.899734 kubelet[2367]: I0909 05:35:54.899739 2367 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:35:54.909418 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 05:35:54.922394 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 9 05:35:54.928060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 05:35:54.941075 kubelet[2367]: I0909 05:35:54.941030 2367 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:35:54.941493 kubelet[2367]: I0909 05:35:54.941402 2367 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:35:54.941493 kubelet[2367]: I0909 05:35:54.941430 2367 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:35:54.943622 kubelet[2367]: I0909 05:35:54.943275 2367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:35:54.944037 kubelet[2367]: E0909 05:35:54.944012 2367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 05:35:54.944207 kubelet[2367]: E0909 05:35:54.944183 2367 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4452.0.0-n-41a4a07365\" not found" Sep 9 05:35:54.994654 systemd[1]: Created slice kubepods-burstable-pod0803a08c82825f6f98823431423d3300.slice - libcontainer container kubepods-burstable-pod0803a08c82825f6f98823431423d3300.slice. Sep 9 05:35:55.008659 kubelet[2367]: E0909 05:35:55.007567 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.014310 systemd[1]: Created slice kubepods-burstable-pod49c04f1a5b5ca7aedf8f0c92fc95945f.slice - libcontainer container kubepods-burstable-pod49c04f1a5b5ca7aedf8f0c92fc95945f.slice. 
Sep 9 05:35:55.023577 kubelet[2367]: E0909 05:35:55.023509 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.027222 systemd[1]: Created slice kubepods-burstable-pod5fa92e4ba6e1d30200e71d199f9da263.slice - libcontainer container kubepods-burstable-pod5fa92e4ba6e1d30200e71d199f9da263.slice. Sep 9 05:35:55.030574 kubelet[2367]: E0909 05:35:55.030466 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.043517 kubelet[2367]: I0909 05:35:55.043473 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.044182 kubelet[2367]: E0909 05:35:55.044145 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.51:6443/api/v1/nodes\": dial tcp 24.199.106.51:6443: connect: connection refused" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.054430 kubelet[2367]: E0909 05:35:55.054373 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-41a4a07365?timeout=10s\": dial tcp 24.199.106.51:6443: connect: connection refused" interval="400ms" Sep 9 05:35:55.055918 kubelet[2367]: I0909 05:35:55.055861 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056102 kubelet[2367]: I0909 05:35:55.056040 2367 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056102 kubelet[2367]: I0909 05:35:55.056099 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056417 kubelet[2367]: I0909 05:35:55.056125 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056417 kubelet[2367]: I0909 05:35:55.056150 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056417 kubelet[2367]: I0909 05:35:55.056177 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: 
\"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056417 kubelet[2367]: I0909 05:35:55.056225 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056417 kubelet[2367]: I0909 05:35:55.056263 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.056561 kubelet[2367]: I0909 05:35:55.056297 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fa92e4ba6e1d30200e71d199f9da263-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-41a4a07365\" (UID: \"5fa92e4ba6e1d30200e71d199f9da263\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.247598 kubelet[2367]: I0909 05:35:55.246351 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.247598 kubelet[2367]: E0909 05:35:55.247315 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.51:6443/api/v1/nodes\": dial tcp 24.199.106.51:6443: connect: connection refused" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.309114 kubelet[2367]: E0909 05:35:55.308969 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.314474 containerd[1536]: time="2025-09-09T05:35:55.314374911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-41a4a07365,Uid:0803a08c82825f6f98823431423d3300,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.324443 kubelet[2367]: E0909 05:35:55.324367 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.332576 kubelet[2367]: E0909 05:35:55.332482 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.332766 containerd[1536]: time="2025-09-09T05:35:55.332716839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-41a4a07365,Uid:49c04f1a5b5ca7aedf8f0c92fc95945f,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.333564 containerd[1536]: time="2025-09-09T05:35:55.333499976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-41a4a07365,Uid:5fa92e4ba6e1d30200e71d199f9da263,Namespace:kube-system,Attempt:0,}" Sep 9 05:35:55.452594 containerd[1536]: time="2025-09-09T05:35:55.452477658Z" level=info msg="connecting to shim 1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83" address="unix:///run/containerd/s/88f381f8f82bb73b9c84557295d767c27b33751682fae2d87e633e6d0855a3e7" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.456173 kubelet[2367]: E0909 05:35:55.456114 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.199.106.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4452.0.0-n-41a4a07365?timeout=10s\": dial tcp 24.199.106.51:6443: connect: connection refused" interval="800ms" Sep 9 05:35:55.473259 
containerd[1536]: time="2025-09-09T05:35:55.473200672Z" level=info msg="connecting to shim 3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24" address="unix:///run/containerd/s/fec94e5e265ccb1a2179c3f67d2a67cf1a10024b3b9f8debab84e17a5ef988cd" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.480372 containerd[1536]: time="2025-09-09T05:35:55.480321772Z" level=info msg="connecting to shim 89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d" address="unix:///run/containerd/s/36f699e4138d41ad24d53aa9d20cad09eb2c971020f28de7d1e4a6b5e15b55bf" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:35:55.587905 kubelet[2367]: E0909 05:35:55.586378 2367 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.199.106.51:6443/api/v1/namespaces/default/events\": dial tcp 24.199.106.51:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4452.0.0-n-41a4a07365.1863867de1fbbff9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4452.0.0-n-41a4a07365,UID:ci-4452.0.0-n-41a4a07365,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4452.0.0-n-41a4a07365,},FirstTimestamp:2025-09-09 05:35:54.825031673 +0000 UTC m=+0.345223688,LastTimestamp:2025-09-09 05:35:54.825031673 +0000 UTC m=+0.345223688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4452.0.0-n-41a4a07365,}" Sep 9 05:35:55.612929 systemd[1]: Started cri-containerd-1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83.scope - libcontainer container 1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83. 
Sep 9 05:35:55.616343 systemd[1]: Started cri-containerd-89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d.scope - libcontainer container 89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d. Sep 9 05:35:55.621665 systemd[1]: Started cri-containerd-3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24.scope - libcontainer container 3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24. Sep 9 05:35:55.651216 kubelet[2367]: I0909 05:35:55.651178 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.651610 kubelet[2367]: E0909 05:35:55.651572 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.199.106.51:6443/api/v1/nodes\": dial tcp 24.199.106.51:6443: connect: connection refused" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:55.698998 kubelet[2367]: W0909 05:35:55.698918 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://24.199.106.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:55.698998 kubelet[2367]: E0909 05:35:55.699001 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://24.199.106.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.708111 kubelet[2367]: W0909 05:35:55.707999 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://24.199.106.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:55.708356 kubelet[2367]: E0909 05:35:55.708116 2367 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://24.199.106.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.772500 containerd[1536]: time="2025-09-09T05:35:55.772234906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4452.0.0-n-41a4a07365,Uid:5fa92e4ba6e1d30200e71d199f9da263,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24\"" Sep 9 05:35:55.772832 containerd[1536]: time="2025-09-09T05:35:55.772766463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4452.0.0-n-41a4a07365,Uid:0803a08c82825f6f98823431423d3300,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83\"" Sep 9 05:35:55.774668 kubelet[2367]: E0909 05:35:55.774628 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.776947 containerd[1536]: time="2025-09-09T05:35:55.776571056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4452.0.0-n-41a4a07365,Uid:49c04f1a5b5ca7aedf8f0c92fc95945f,Namespace:kube-system,Attempt:0,} returns sandbox id \"89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d\"" Sep 9 05:35:55.777054 kubelet[2367]: E0909 05:35:55.776786 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.777683 kubelet[2367]: E0909 05:35:55.777630 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:55.778737 containerd[1536]: time="2025-09-09T05:35:55.778373288Z" level=info msg="CreateContainer within sandbox \"3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 05:35:55.781566 containerd[1536]: time="2025-09-09T05:35:55.781455409Z" level=info msg="CreateContainer within sandbox \"1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 05:35:55.784353 containerd[1536]: time="2025-09-09T05:35:55.783734607Z" level=info msg="CreateContainer within sandbox \"89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 05:35:55.795778 containerd[1536]: time="2025-09-09T05:35:55.795712372Z" level=info msg="Container d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.799954 containerd[1536]: time="2025-09-09T05:35:55.799739731Z" level=info msg="Container 4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.803516 containerd[1536]: time="2025-09-09T05:35:55.803400810Z" level=info msg="Container fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:35:55.827044 containerd[1536]: time="2025-09-09T05:35:55.826965685Z" level=info msg="CreateContainer within sandbox \"1e312629d70e304fa175a5afb0dfc9f32c5c622dbf58bee8db178d827477be83\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5\"" Sep 9 05:35:55.828284 containerd[1536]: time="2025-09-09T05:35:55.828232393Z" level=info msg="CreateContainer within sandbox 
\"3bf269bb5f80c5c05117fe689885659a94ba17de4e7cdddff7dd0354edeb6a24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40\"" Sep 9 05:35:55.829502 containerd[1536]: time="2025-09-09T05:35:55.829402394Z" level=info msg="CreateContainer within sandbox \"89321a066357b513a48376f66d5828a2b0ff70fe32ffd26ccf7ba2ee7d0c526d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455\"" Sep 9 05:35:55.830700 containerd[1536]: time="2025-09-09T05:35:55.830412658Z" level=info msg="StartContainer for \"d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40\"" Sep 9 05:35:55.830763 kubelet[2367]: W0909 05:35:55.830511 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://24.199.106.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-41a4a07365&limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:55.830763 kubelet[2367]: E0909 05:35:55.830625 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://24.199.106.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4452.0.0-n-41a4a07365&limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:55.831074 containerd[1536]: time="2025-09-09T05:35:55.831038146Z" level=info msg="StartContainer for \"4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455\"" Sep 9 05:35:55.831942 containerd[1536]: time="2025-09-09T05:35:55.831907867Z" level=info msg="StartContainer for \"fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5\"" Sep 9 05:35:55.833283 containerd[1536]: time="2025-09-09T05:35:55.833246492Z" level=info msg="connecting to shim 
fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5" address="unix:///run/containerd/s/88f381f8f82bb73b9c84557295d767c27b33751682fae2d87e633e6d0855a3e7" protocol=ttrpc version=3 Sep 9 05:35:55.835313 containerd[1536]: time="2025-09-09T05:35:55.835259524Z" level=info msg="connecting to shim d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40" address="unix:///run/containerd/s/fec94e5e265ccb1a2179c3f67d2a67cf1a10024b3b9f8debab84e17a5ef988cd" protocol=ttrpc version=3 Sep 9 05:35:55.838891 containerd[1536]: time="2025-09-09T05:35:55.833253389Z" level=info msg="connecting to shim 4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455" address="unix:///run/containerd/s/36f699e4138d41ad24d53aa9d20cad09eb2c971020f28de7d1e4a6b5e15b55bf" protocol=ttrpc version=3 Sep 9 05:35:55.874989 systemd[1]: Started cri-containerd-4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455.scope - libcontainer container 4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455. Sep 9 05:35:55.886764 systemd[1]: Started cri-containerd-d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40.scope - libcontainer container d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40. Sep 9 05:35:55.888806 systemd[1]: Started cri-containerd-fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5.scope - libcontainer container fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5. 
Sep 9 05:35:56.029909 containerd[1536]: time="2025-09-09T05:35:56.029852370Z" level=info msg="StartContainer for \"fa86c75d9003d72c5b5515caf808a7dcedae4011a842da53442f9f10fa329be5\" returns successfully" Sep 9 05:35:56.045521 containerd[1536]: time="2025-09-09T05:35:56.045283739Z" level=info msg="StartContainer for \"4c8170379f37395fdda527823b8ded509437caf4b0ef229ededa29d6f092c455\" returns successfully" Sep 9 05:35:56.063171 kubelet[2367]: W0909 05:35:56.063020 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://24.199.106.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 24.199.106.51:6443: connect: connection refused Sep 9 05:35:56.063171 kubelet[2367]: E0909 05:35:56.063124 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://24.199.106.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.199.106.51:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:35:56.082393 containerd[1536]: time="2025-09-09T05:35:56.082307306Z" level=info msg="StartContainer for \"d3242274fdce226492f3629517fc214540abd9b5e4edbbabab82ba6bc5256d40\" returns successfully" Sep 9 05:35:56.453796 kubelet[2367]: I0909 05:35:56.453731 2367 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:56.936697 kubelet[2367]: E0909 05:35:56.936488 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:56.939078 kubelet[2367]: E0909 05:35:56.938823 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:56.941434 kubelet[2367]: E0909 
05:35:56.940863 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:56.941434 kubelet[2367]: E0909 05:35:56.941051 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:56.949450 kubelet[2367]: E0909 05:35:56.949094 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:56.950002 kubelet[2367]: E0909 05:35:56.949962 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:57.950516 kubelet[2367]: E0909 05:35:57.950284 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:57.953351 kubelet[2367]: E0909 05:35:57.952949 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:57.953351 kubelet[2367]: E0909 05:35:57.953216 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:57.954585 kubelet[2367]: E0909 05:35:57.953870 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:57.954898 kubelet[2367]: E0909 05:35:57.951772 2367 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:57.955194 kubelet[2367]: E0909 05:35:57.955170 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:58.505605 kubelet[2367]: E0909 05:35:58.505527 2367 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4452.0.0-n-41a4a07365\" not found" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.645726 kubelet[2367]: I0909 05:35:58.645650 2367 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.645726 kubelet[2367]: E0909 05:35:58.645724 2367 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4452.0.0-n-41a4a07365\": node \"ci-4452.0.0-n-41a4a07365\" not found" Sep 9 05:35:58.652219 kubelet[2367]: I0909 05:35:58.651645 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.683049 kubelet[2367]: E0909 05:35:58.683007 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.683252 kubelet[2367]: I0909 05:35:58.683227 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.687522 kubelet[2367]: E0909 05:35:58.687468 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.687856 kubelet[2367]: I0909 05:35:58.687802 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.690809 kubelet[2367]: E0909 05:35:58.690760 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-41a4a07365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.817717 kubelet[2367]: I0909 05:35:58.816748 2367 apiserver.go:52] "Watching apiserver" Sep 9 05:35:58.854069 kubelet[2367]: I0909 05:35:58.853928 2367 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 05:35:58.950179 kubelet[2367]: I0909 05:35:58.950099 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.950753 kubelet[2367]: I0909 05:35:58.950628 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.955230 kubelet[2367]: E0909 05:35:58.955165 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-41a4a07365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:35:58.955495 kubelet[2367]: E0909 05:35:58.955463 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:35:58.955626 kubelet[2367]: E0909 05:35:58.955165 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" 
Sep 9 05:35:58.955885 kubelet[2367]: E0909 05:35:58.955856 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:00.013024 kubelet[2367]: I0909 05:36:00.012974 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" Sep 9 05:36:00.032006 kubelet[2367]: W0909 05:36:00.031735 2367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 05:36:00.033212 kubelet[2367]: E0909 05:36:00.032932 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:00.494479 kubelet[2367]: I0909 05:36:00.494451 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" Sep 9 05:36:00.507316 kubelet[2367]: W0909 05:36:00.507236 2367 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 05:36:00.508727 kubelet[2367]: E0909 05:36:00.508686 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:00.952774 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-9.scope)... Sep 9 05:36:00.952803 systemd[1]: Reloading... 
Sep 9 05:36:00.955116 kubelet[2367]: E0909 05:36:00.954720 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:00.955166 kubelet[2367]: E0909 05:36:00.955134 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:01.100712 zram_generator::config[2681]: No configuration found. Sep 9 05:36:01.456822 systemd[1]: Reloading finished in 503 ms. Sep 9 05:36:01.493705 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:36:01.514514 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 05:36:01.514948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:36:01.515021 systemd[1]: kubelet.service: Consumed 885ms CPU time, 125.4M memory peak. Sep 9 05:36:01.520632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:36:01.795777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:36:01.809991 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:36:01.909143 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:01.911609 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 05:36:01.911609 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:36:01.913722 kubelet[2729]: I0909 05:36:01.912872 2729 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:36:01.931838 kubelet[2729]: I0909 05:36:01.931757 2729 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 05:36:01.932401 kubelet[2729]: I0909 05:36:01.931918 2729 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:36:01.933408 kubelet[2729]: I0909 05:36:01.933214 2729 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 05:36:01.941633 kubelet[2729]: I0909 05:36:01.940649 2729 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 05:36:01.951211 kubelet[2729]: I0909 05:36:01.951163 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:36:01.965804 kubelet[2729]: I0909 05:36:01.965773 2729 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:36:01.972043 kubelet[2729]: I0909 05:36:01.971982 2729 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:36:01.972614 kubelet[2729]: I0909 05:36:01.972535 2729 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:36:01.973039 kubelet[2729]: I0909 05:36:01.972753 2729 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4452.0.0-n-41a4a07365","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 05:36:01.973291 kubelet[2729]: I0909 05:36:01.973275 2729 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 9 05:36:01.973349 kubelet[2729]: I0909 05:36:01.973341 2729 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 05:36:01.973447 kubelet[2729]: I0909 05:36:01.973438 2729 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:36:01.973768 kubelet[2729]: I0909 05:36:01.973748 2729 kubelet.go:446] "Attempting to sync node with API server" Sep 9 05:36:01.974516 kubelet[2729]: I0909 05:36:01.974489 2729 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:36:01.974693 kubelet[2729]: I0909 05:36:01.974679 2729 kubelet.go:352] "Adding apiserver pod source" Sep 9 05:36:01.974799 kubelet[2729]: I0909 05:36:01.974785 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:36:01.977740 kubelet[2729]: I0909 05:36:01.977700 2729 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:36:01.980597 kubelet[2729]: I0909 05:36:01.978536 2729 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:36:01.980597 kubelet[2729]: I0909 05:36:01.980470 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 05:36:01.980597 kubelet[2729]: I0909 05:36:01.980520 2729 server.go:1287] "Started kubelet" Sep 9 05:36:01.991215 kubelet[2729]: I0909 05:36:01.990947 2729 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:36:01.997380 kubelet[2729]: I0909 05:36:01.997134 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:36:01.998722 kubelet[2729]: I0909 05:36:01.998401 2729 server.go:479] "Adding debug handlers to kubelet server" Sep 9 05:36:02.000051 kubelet[2729]: I0909 05:36:01.999781 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:36:02.000166 kubelet[2729]: I0909 05:36:02.000117 2729 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 05:36:02.015412 kubelet[2729]: I0909 05:36:02.015349 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:36:02.035017 kubelet[2729]: I0909 05:36:02.015828 2729 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 05:36:02.035991 kubelet[2729]: I0909 05:36:02.015846 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 05:36:02.035991 kubelet[2729]: E0909 05:36:02.016093 2729 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4452.0.0-n-41a4a07365\" not found"
Sep 9 05:36:02.039679 kubelet[2729]: I0909 05:36:02.039599 2729 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 05:36:02.050874 kubelet[2729]: I0909 05:36:02.047305 2729 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 05:36:02.070512 kubelet[2729]: E0909 05:36:02.070407 2729 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 05:36:02.078103 kubelet[2729]: I0909 05:36:02.077022 2729 factory.go:221] Registration of the containerd container factory successfully
Sep 9 05:36:02.079314 kubelet[2729]: I0909 05:36:02.078913 2729 factory.go:221] Registration of the systemd container factory successfully
Sep 9 05:36:02.107569 kubelet[2729]: I0909 05:36:02.106979 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 05:36:02.115155 kubelet[2729]: I0909 05:36:02.114929 2729 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 05:36:02.116393 kubelet[2729]: I0909 05:36:02.116357 2729 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 05:36:02.116393 kubelet[2729]: I0909 05:36:02.116570 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 05:36:02.116393 kubelet[2729]: I0909 05:36:02.116581 2729 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 05:36:02.119769 kubelet[2729]: E0909 05:36:02.118143 2729 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 05:36:02.217395 kubelet[2729]: I0909 05:36:02.217354 2729 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.217718 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.217765 2729 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.218069 2729 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.218084 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.218113 2729 policy_none.go:49] "None policy: Start"
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.218142 2729 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 05:36:02.218355 kubelet[2729]: I0909 05:36:02.218158 2729 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:36:02.218841 kubelet[2729]: E0909 05:36:02.218799 2729 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 05:36:02.219016 kubelet[2729]: I0909 05:36:02.219001 2729 state_mem.go:75] "Updated machine memory state"
Sep 9 05:36:02.263578 kubelet[2729]: I0909 05:36:02.263496 2729 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:36:02.264329 kubelet[2729]: I0909 05:36:02.264306 2729 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:36:02.265646 kubelet[2729]: I0909 05:36:02.265128 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:36:02.269296 kubelet[2729]: I0909 05:36:02.269251 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:36:02.273242 kubelet[2729]: E0909 05:36:02.272162 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 05:36:02.379694 kubelet[2729]: I0909 05:36:02.379527 2729 kubelet_node_status.go:75] "Attempting to register node" node="ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.399929 kubelet[2729]: I0909 05:36:02.399869 2729 kubelet_node_status.go:124] "Node was previously registered" node="ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.400319 kubelet[2729]: I0909 05:36:02.400244 2729 kubelet_node_status.go:78] "Successfully registered node" node="ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.423283 kubelet[2729]: I0909 05:36:02.422988 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.425505 kubelet[2729]: I0909 05:36:02.425434 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.426580 kubelet[2729]: I0909 05:36:02.426331 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443366 kubelet[2729]: I0909 05:36:02.442813 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-ca-certs\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443366 kubelet[2729]: I0909 05:36:02.442911 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443366 kubelet[2729]: I0909 05:36:02.442948 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-kubeconfig\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443366 kubelet[2729]: I0909 05:36:02.442994 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443366 kubelet[2729]: I0909 05:36:02.443029 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fa92e4ba6e1d30200e71d199f9da263-kubeconfig\") pod \"kube-scheduler-ci-4452.0.0-n-41a4a07365\" (UID: \"5fa92e4ba6e1d30200e71d199f9da263\") " pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443817 kubelet[2729]: I0909 05:36:02.443059 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0803a08c82825f6f98823431423d3300-k8s-certs\") pod \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" (UID: \"0803a08c82825f6f98823431423d3300\") " pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443817 kubelet[2729]: I0909 05:36:02.443087 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-ca-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443817 kubelet[2729]: I0909 05:36:02.443152 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-flexvolume-dir\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.443817 kubelet[2729]: I0909 05:36:02.443185 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49c04f1a5b5ca7aedf8f0c92fc95945f-k8s-certs\") pod \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" (UID: \"49c04f1a5b5ca7aedf8f0c92fc95945f\") " pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.451391 kubelet[2729]: W0909 05:36:02.451187 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:36:02.452269 kubelet[2729]: E0909 05:36:02.452189 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4452.0.0-n-41a4a07365\" already exists" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.455431 kubelet[2729]: W0909 05:36:02.455201 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:36:02.455431 kubelet[2729]: E0909 05:36:02.455309 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4452.0.0-n-41a4a07365\" already exists" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:02.460926 kubelet[2729]: W0909 05:36:02.460772 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:36:02.753771 kubelet[2729]: E0909 05:36:02.753341 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:02.757005 kubelet[2729]: E0909 05:36:02.756837 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:02.766099 kubelet[2729]: E0909 05:36:02.766025 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:02.976822 kubelet[2729]: I0909 05:36:02.976647 2729 apiserver.go:52] "Watching apiserver"
Sep 9 05:36:03.036053 kubelet[2729]: I0909 05:36:03.035867 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 9 05:36:03.134175 kubelet[2729]: I0909 05:36:03.133639 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365" podStartSLOduration=1.133619457 podStartE2EDuration="1.133619457s" podCreationTimestamp="2025-09-09 05:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:03.133068198 +0000 UTC m=+1.315564922" watchObservedRunningTime="2025-09-09 05:36:03.133619457 +0000 UTC m=+1.316116185"
Sep 9 05:36:03.175676 kubelet[2729]: I0909 05:36:03.174710 2729 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:03.175676 kubelet[2729]: E0909 05:36:03.174854 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:03.175676 kubelet[2729]: E0909 05:36:03.175330 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:03.182014 kubelet[2729]: I0909 05:36:03.181933 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4452.0.0-n-41a4a07365" podStartSLOduration=3.1819120180000002 podStartE2EDuration="3.181912018s" podCreationTimestamp="2025-09-09 05:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:03.159870297 +0000 UTC m=+1.342367021" watchObservedRunningTime="2025-09-09 05:36:03.181912018 +0000 UTC m=+1.364408725"
Sep 9 05:36:03.182568 kubelet[2729]: I0909 05:36:03.182474 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4452.0.0-n-41a4a07365" podStartSLOduration=3.182458331 podStartE2EDuration="3.182458331s" podCreationTimestamp="2025-09-09 05:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:03.181713308 +0000 UTC m=+1.364210028" watchObservedRunningTime="2025-09-09 05:36:03.182458331 +0000 UTC m=+1.364955050"
Sep 9 05:36:03.196254 kubelet[2729]: W0909 05:36:03.196203 2729 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 9 05:36:03.196451 kubelet[2729]: E0909 05:36:03.196313 2729 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4452.0.0-n-41a4a07365\" already exists" pod="kube-system/kube-apiserver-ci-4452.0.0-n-41a4a07365"
Sep 9 05:36:03.196620 kubelet[2729]: E0909 05:36:03.196568 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:04.177434 kubelet[2729]: E0909 05:36:04.177359 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:04.177990 kubelet[2729]: E0909 05:36:04.177951 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:04.178306 kubelet[2729]: E0909 05:36:04.178252 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:06.955266 update_engine[1495]: I20250909 05:36:06.955142 1495 update_attempter.cc:509] Updating boot flags...
Sep 9 05:36:07.254271 kubelet[2729]: I0909 05:36:07.254215 2729 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 05:36:07.259632 containerd[1536]: time="2025-09-09T05:36:07.259326906Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 05:36:07.260303 kubelet[2729]: I0909 05:36:07.260268 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 9 05:36:07.468518 kubelet[2729]: E0909 05:36:07.467693 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:07.931191 systemd[1]: Created slice kubepods-besteffort-pod6a273fd2_179e_4d72_a8c7_01c75c7e7cee.slice - libcontainer container kubepods-besteffort-pod6a273fd2_179e_4d72_a8c7_01c75c7e7cee.slice.
Sep 9 05:36:07.992149 kubelet[2729]: I0909 05:36:07.992101 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a273fd2-179e-4d72-a8c7-01c75c7e7cee-lib-modules\") pod \"kube-proxy-jb22j\" (UID: \"6a273fd2-179e-4d72-a8c7-01c75c7e7cee\") " pod="kube-system/kube-proxy-jb22j"
Sep 9 05:36:07.992364 kubelet[2729]: I0909 05:36:07.992192 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a273fd2-179e-4d72-a8c7-01c75c7e7cee-kube-proxy\") pod \"kube-proxy-jb22j\" (UID: \"6a273fd2-179e-4d72-a8c7-01c75c7e7cee\") " pod="kube-system/kube-proxy-jb22j"
Sep 9 05:36:07.992364 kubelet[2729]: I0909 05:36:07.992260 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a273fd2-179e-4d72-a8c7-01c75c7e7cee-xtables-lock\") pod \"kube-proxy-jb22j\" (UID: \"6a273fd2-179e-4d72-a8c7-01c75c7e7cee\") " pod="kube-system/kube-proxy-jb22j"
Sep 9 05:36:07.992364 kubelet[2729]: I0909 05:36:07.992284 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qk5\" (UniqueName: \"kubernetes.io/projected/6a273fd2-179e-4d72-a8c7-01c75c7e7cee-kube-api-access-77qk5\") pod \"kube-proxy-jb22j\" (UID: \"6a273fd2-179e-4d72-a8c7-01c75c7e7cee\") " pod="kube-system/kube-proxy-jb22j"
Sep 9 05:36:08.195439 kubelet[2729]: E0909 05:36:08.195064 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:08.244702 kubelet[2729]: E0909 05:36:08.243837 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:08.245506 containerd[1536]: time="2025-09-09T05:36:08.245451851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb22j,Uid:6a273fd2-179e-4d72-a8c7-01c75c7e7cee,Namespace:kube-system,Attempt:0,}"
Sep 9 05:36:08.287652 containerd[1536]: time="2025-09-09T05:36:08.287460631Z" level=info msg="connecting to shim 2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828" address="unix:///run/containerd/s/47f8cd0a1dc68ab2e707b470d0515ea95d96827320a86477488787329f1fdd7e" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:36:08.345237 systemd[1]: Started cri-containerd-2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828.scope - libcontainer container 2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828.
Sep 9 05:36:08.360524 kubelet[2729]: I0909 05:36:08.360460 2729 status_manager.go:890] "Failed to get status for pod" podUID="73c24ea6-d5c2-4253-8e96-ca7009cf6911" pod="tigera-operator/tigera-operator-755d956888-d4lnz" err="pods \"tigera-operator-755d956888-d4lnz\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object"
Sep 9 05:36:08.363135 kubelet[2729]: W0909 05:36:08.361702 2729 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4452.0.0-n-41a4a07365" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object
Sep 9 05:36:08.363314 kubelet[2729]: W0909 05:36:08.363289 2729 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4452.0.0-n-41a4a07365" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object
Sep 9 05:36:08.363923 kubelet[2729]: E0909 05:36:08.363347 2729 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" logger="UnhandledError"
Sep 9 05:36:08.363923 kubelet[2729]: E0909 05:36:08.363629 2729 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" logger="UnhandledError"
Sep 9 05:36:08.367625 systemd[1]: Created slice kubepods-besteffort-pod73c24ea6_d5c2_4253_8e96_ca7009cf6911.slice - libcontainer container kubepods-besteffort-pod73c24ea6_d5c2_4253_8e96_ca7009cf6911.slice.
Sep 9 05:36:08.394490 kubelet[2729]: I0909 05:36:08.394435 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmh9\" (UniqueName: \"kubernetes.io/projected/73c24ea6-d5c2-4253-8e96-ca7009cf6911-kube-api-access-9bmh9\") pod \"tigera-operator-755d956888-d4lnz\" (UID: \"73c24ea6-d5c2-4253-8e96-ca7009cf6911\") " pod="tigera-operator/tigera-operator-755d956888-d4lnz"
Sep 9 05:36:08.395425 kubelet[2729]: I0909 05:36:08.395390 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73c24ea6-d5c2-4253-8e96-ca7009cf6911-var-lib-calico\") pod \"tigera-operator-755d956888-d4lnz\" (UID: \"73c24ea6-d5c2-4253-8e96-ca7009cf6911\") " pod="tigera-operator/tigera-operator-755d956888-d4lnz"
Sep 9 05:36:08.414090 containerd[1536]: time="2025-09-09T05:36:08.413972419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb22j,Uid:6a273fd2-179e-4d72-a8c7-01c75c7e7cee,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828\""
Sep 9 05:36:08.415633 kubelet[2729]: E0909 05:36:08.415604 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:08.419985 containerd[1536]: time="2025-09-09T05:36:08.419919269Z" level=info msg="CreateContainer within sandbox \"2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 9 05:36:08.439637 containerd[1536]: time="2025-09-09T05:36:08.439539024Z" level=info msg="Container 746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:36:08.457768 containerd[1536]: time="2025-09-09T05:36:08.457084228Z" level=info msg="CreateContainer within sandbox \"2c32e37ab4682b085b8032209f9e1936f50bee4ca5146c22dad644b6cafc4828\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672\""
Sep 9 05:36:08.464937 containerd[1536]: time="2025-09-09T05:36:08.464870779Z" level=info msg="StartContainer for \"746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672\""
Sep 9 05:36:08.467376 containerd[1536]: time="2025-09-09T05:36:08.467320333Z" level=info msg="connecting to shim 746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672" address="unix:///run/containerd/s/47f8cd0a1dc68ab2e707b470d0515ea95d96827320a86477488787329f1fdd7e" protocol=ttrpc version=3
Sep 9 05:36:08.497903 systemd[1]: Started cri-containerd-746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672.scope - libcontainer container 746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672.
Sep 9 05:36:08.577587 containerd[1536]: time="2025-09-09T05:36:08.576340179Z" level=info msg="StartContainer for \"746676f905a7215be6192e3a46bb7399d56d6e210ad70a2c0571727b3450d672\" returns successfully"
Sep 9 05:36:09.034166 kubelet[2729]: E0909 05:36:09.034112 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:09.205397 kubelet[2729]: E0909 05:36:09.205341 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:09.205912 kubelet[2729]: E0909 05:36:09.205491 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:09.205912 kubelet[2729]: E0909 05:36:09.205774 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:09.506159 kubelet[2729]: E0909 05:36:09.505997 2729 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 9 05:36:09.506159 kubelet[2729]: E0909 05:36:09.506062 2729 projected.go:194] Error preparing data for projected volume kube-api-access-9bmh9 for pod tigera-operator/tigera-operator-755d956888-d4lnz: failed to sync configmap cache: timed out waiting for the condition
Sep 9 05:36:09.506159 kubelet[2729]: E0909 05:36:09.506149 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73c24ea6-d5c2-4253-8e96-ca7009cf6911-kube-api-access-9bmh9 podName:73c24ea6-d5c2-4253-8e96-ca7009cf6911 nodeName:}" failed. No retries permitted until 2025-09-09 05:36:10.006127425 +0000 UTC m=+8.188624132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9bmh9" (UniqueName: "kubernetes.io/projected/73c24ea6-d5c2-4253-8e96-ca7009cf6911-kube-api-access-9bmh9") pod "tigera-operator-755d956888-d4lnz" (UID: "73c24ea6-d5c2-4253-8e96-ca7009cf6911") : failed to sync configmap cache: timed out waiting for the condition
Sep 9 05:36:10.175381 containerd[1536]: time="2025-09-09T05:36:10.175311359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-d4lnz,Uid:73c24ea6-d5c2-4253-8e96-ca7009cf6911,Namespace:tigera-operator,Attempt:0,}"
Sep 9 05:36:10.208278 kubelet[2729]: E0909 05:36:10.208100 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:36:10.211365 containerd[1536]: time="2025-09-09T05:36:10.211270902Z" level=info msg="connecting to shim b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e" address="unix:///run/containerd/s/14b67fb233a9dd96c479e7a8d922d9a9635cca6407653781da6bb902dfa51c8d" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:36:10.267881 systemd[1]: Started cri-containerd-b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e.scope - libcontainer container b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e.
Sep 9 05:36:10.347410 containerd[1536]: time="2025-09-09T05:36:10.347216353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-d4lnz,Uid:73c24ea6-d5c2-4253-8e96-ca7009cf6911,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e\"" Sep 9 05:36:10.352809 containerd[1536]: time="2025-09-09T05:36:10.352711071Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 05:36:10.356029 systemd-resolved[1397]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Sep 9 05:36:11.639477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount214593648.mount: Deactivated successfully. Sep 9 05:36:12.815817 containerd[1536]: time="2025-09-09T05:36:12.815719925Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:12.817574 containerd[1536]: time="2025-09-09T05:36:12.817292264Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 05:36:12.818558 containerd[1536]: time="2025-09-09T05:36:12.818461934Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:12.823174 containerd[1536]: time="2025-09-09T05:36:12.822807819Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:12.824025 containerd[1536]: time="2025-09-09T05:36:12.823940602Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest 
\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.471137795s" Sep 9 05:36:12.824025 containerd[1536]: time="2025-09-09T05:36:12.824019857Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 05:36:12.829474 containerd[1536]: time="2025-09-09T05:36:12.829411043Z" level=info msg="CreateContainer within sandbox \"b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 05:36:12.844590 containerd[1536]: time="2025-09-09T05:36:12.842735022Z" level=info msg="Container ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:12.848281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645560981.mount: Deactivated successfully. Sep 9 05:36:12.860981 containerd[1536]: time="2025-09-09T05:36:12.860922018Z" level=info msg="CreateContainer within sandbox \"b2f3f9978369275a090221893e713e1034c7ef5a5e6f8e83b9aba00c9c86477e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba\"" Sep 9 05:36:12.862245 containerd[1536]: time="2025-09-09T05:36:12.862159279Z" level=info msg="StartContainer for \"ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba\"" Sep 9 05:36:12.863569 containerd[1536]: time="2025-09-09T05:36:12.863422088Z" level=info msg="connecting to shim ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba" address="unix:///run/containerd/s/14b67fb233a9dd96c479e7a8d922d9a9635cca6407653781da6bb902dfa51c8d" protocol=ttrpc version=3 Sep 9 05:36:12.897848 systemd[1]: Started cri-containerd-ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba.scope - libcontainer container 
ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba. Sep 9 05:36:12.943404 containerd[1536]: time="2025-09-09T05:36:12.943340304Z" level=info msg="StartContainer for \"ff84b6ed292e7182402303f8aab836787537426396d0dea2b519ad5167f5e8ba\" returns successfully" Sep 9 05:36:13.153700 kubelet[2729]: E0909 05:36:13.153288 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:13.171013 kubelet[2729]: I0909 05:36:13.170852 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jb22j" podStartSLOduration=6.170785538 podStartE2EDuration="6.170785538s" podCreationTimestamp="2025-09-09 05:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:09.235097657 +0000 UTC m=+7.417594380" watchObservedRunningTime="2025-09-09 05:36:13.170785538 +0000 UTC m=+11.353282271" Sep 9 05:36:13.241906 kubelet[2729]: I0909 05:36:13.241779 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-d4lnz" podStartSLOduration=2.765690734 podStartE2EDuration="5.241719189s" podCreationTimestamp="2025-09-09 05:36:08 +0000 UTC" firstStartedPulling="2025-09-09 05:36:10.349439 +0000 UTC m=+8.531935703" lastFinishedPulling="2025-09-09 05:36:12.825467461 +0000 UTC m=+11.007964158" observedRunningTime="2025-09-09 05:36:13.241135478 +0000 UTC m=+11.423632212" watchObservedRunningTime="2025-09-09 05:36:13.241719189 +0000 UTC m=+11.424215914" Sep 9 05:36:20.391783 sudo[1796]: pam_unix(sudo:session): session closed for user root Sep 9 05:36:20.396726 sshd[1795]: Connection closed by 139.178.89.65 port 38888 Sep 9 05:36:20.400200 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Sep 9 05:36:20.406354 systemd-logind[1492]: 
Session 9 logged out. Waiting for processes to exit. Sep 9 05:36:20.409347 systemd[1]: sshd@8-24.199.106.51:22-139.178.89.65:38888.service: Deactivated successfully. Sep 9 05:36:20.415149 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:36:20.416247 systemd[1]: session-9.scope: Consumed 6.390s CPU time, 166.7M memory peak. Sep 9 05:36:20.422138 systemd-logind[1492]: Removed session 9. Sep 9 05:36:22.866960 systemd[1]: Started sshd@9-24.199.106.51:22-119.187.164.226:54031.service - OpenSSH per-connection server daemon (119.187.164.226:54031). Sep 9 05:36:22.934597 sshd[3140]: Connection closed by 119.187.164.226 port 54031 Sep 9 05:36:22.934308 systemd[1]: sshd@9-24.199.106.51:22-119.187.164.226:54031.service: Deactivated successfully. Sep 9 05:36:26.351167 kubelet[2729]: I0909 05:36:26.351098 2729 status_manager.go:890] "Failed to get status for pod" podUID="7d5eef90-94b5-4dac-ab40-3bbcb82d282e" pod="calico-system/calico-typha-79b9cbf4f5-j85s2" err="pods \"calico-typha-79b9cbf4f5-j85s2\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" Sep 9 05:36:26.353604 kubelet[2729]: W0909 05:36:26.351216 2729 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4452.0.0-n-41a4a07365" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object Sep 9 05:36:26.353604 kubelet[2729]: E0909 05:36:26.351253 2729 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot list resource \"configmaps\" in API group 
\"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" logger="UnhandledError"
Sep 9 05:36:26.353604 kubelet[2729]: W0909 05:36:26.351316 2729 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4452.0.0-n-41a4a07365" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object
Sep 9 05:36:26.353604 kubelet[2729]: E0909 05:36:26.351333 2729 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" logger="UnhandledError"
Sep 9 05:36:26.363406 systemd[1]: Created slice kubepods-besteffort-pod7d5eef90_94b5_4dac_ab40_3bbcb82d282e.slice - libcontainer container kubepods-besteffort-pod7d5eef90_94b5_4dac_ab40_3bbcb82d282e.slice.
Sep 9 05:36:26.427834 kubelet[2729]: I0909 05:36:26.427692 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d5eef90-94b5-4dac-ab40-3bbcb82d282e-tigera-ca-bundle\") pod \"calico-typha-79b9cbf4f5-j85s2\" (UID: \"7d5eef90-94b5-4dac-ab40-3bbcb82d282e\") " pod="calico-system/calico-typha-79b9cbf4f5-j85s2"
Sep 9 05:36:26.428214 kubelet[2729]: I0909 05:36:26.427791 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7d5eef90-94b5-4dac-ab40-3bbcb82d282e-typha-certs\") pod \"calico-typha-79b9cbf4f5-j85s2\" (UID: \"7d5eef90-94b5-4dac-ab40-3bbcb82d282e\") " pod="calico-system/calico-typha-79b9cbf4f5-j85s2"
Sep 9 05:36:26.428214 kubelet[2729]: I0909 05:36:26.427963 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xltd\" (UniqueName: \"kubernetes.io/projected/7d5eef90-94b5-4dac-ab40-3bbcb82d282e-kube-api-access-4xltd\") pod \"calico-typha-79b9cbf4f5-j85s2\" (UID: \"7d5eef90-94b5-4dac-ab40-3bbcb82d282e\") " pod="calico-system/calico-typha-79b9cbf4f5-j85s2"
Sep 9 05:36:26.646984 systemd[1]: Created slice kubepods-besteffort-pod8864ad20_e1a3_4e6e_bbe4_4e815fbf1c52.slice - libcontainer container kubepods-besteffort-pod8864ad20_e1a3_4e6e_bbe4_4e815fbf1c52.slice.
Sep 9 05:36:26.731595 kubelet[2729]: I0909 05:36:26.731027 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-cni-net-dir\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732052 kubelet[2729]: I0909 05:36:26.732012 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggbp\" (UniqueName: \"kubernetes.io/projected/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-kube-api-access-7ggbp\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732169 kubelet[2729]: I0909 05:36:26.732086 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-lib-modules\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732169 kubelet[2729]: I0909 05:36:26.732105 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-node-certs\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732169 kubelet[2729]: I0909 05:36:26.732122 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-policysync\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732297 kubelet[2729]: I0909 05:36:26.732179 2729
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-xtables-lock\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732297 kubelet[2729]: I0909 05:36:26.732199 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-cni-bin-dir\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732297 kubelet[2729]: I0909 05:36:26.732246 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-var-lib-calico\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732427 kubelet[2729]: I0909 05:36:26.732262 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-var-run-calico\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732427 kubelet[2729]: I0909 05:36:26.732325 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-cni-log-dir\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732427 kubelet[2729]: I0909 05:36:26.732342 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-flexvol-driver-host\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.732427 kubelet[2729]: I0909 05:36:26.732390 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52-tigera-ca-bundle\") pod \"calico-node-7czt5\" (UID: \"8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52\") " pod="calico-system/calico-node-7czt5"
Sep 9 05:36:26.835450 kubelet[2729]: E0909 05:36:26.835396 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.835450 kubelet[2729]: W0909 05:36:26.835434 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.836353 kubelet[2729]: E0909 05:36:26.836306 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:26.836723 kubelet[2729]: E0909 05:36:26.836687 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.836833 kubelet[2729]: W0909 05:36:26.836738 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.836833 kubelet[2729]: E0909 05:36:26.836758 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:26.837037 kubelet[2729]: E0909 05:36:26.837019 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.837106 kubelet[2729]: W0909 05:36:26.837055 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.837106 kubelet[2729]: E0909 05:36:26.837069 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:26.837872 kubelet[2729]: E0909 05:36:26.837323 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.837872 kubelet[2729]: W0909 05:36:26.837339 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.837872 kubelet[2729]: E0909 05:36:26.837351 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:26.844263 kubelet[2729]: E0909 05:36:26.844204 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.844448 kubelet[2729]: W0909 05:36:26.844324 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.844448 kubelet[2729]: E0909 05:36:26.844361 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:26.845392 kubelet[2729]: E0909 05:36:26.845351 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.845392 kubelet[2729]: W0909 05:36:26.845379 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.845803 kubelet[2729]: E0909 05:36:26.845412 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:26.845908 kubelet[2729]: E0909 05:36:26.845885 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.845908 kubelet[2729]: W0909 05:36:26.845905 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.847467 kubelet[2729]: E0909 05:36:26.847436 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.847467 kubelet[2729]: W0909 05:36:26.847462 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.847467 kubelet[2729]: E0909 05:36:26.847484 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:26.848844 kubelet[2729]: E0909 05:36:26.848636 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:26.859108 kubelet[2729]: E0909 05:36:26.859030 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:26.859108 kubelet[2729]: W0909 05:36:26.859062 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:26.859108 kubelet[2729]: E0909 05:36:26.859090 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:26.963646 kubelet[2729]: E0909 05:36:26.963317 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220"
Sep 9 05:36:27.002564 kubelet[2729]: E0909 05:36:27.002504 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.002811 kubelet[2729]: W0909 05:36:27.002533 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.002811 kubelet[2729]: E0909 05:36:27.002661 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.002906 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.003781 kubelet[2729]: W0909 05:36:27.002915 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.002927 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.003095 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.003781 kubelet[2729]: W0909 05:36:27.003104 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.003114 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.003347 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.003781 kubelet[2729]: W0909 05:36:27.003356 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.003367 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.003781 kubelet[2729]: E0909 05:36:27.003760 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.005307 kubelet[2729]: W0909 05:36:27.003776 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.003793 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.004122 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.005307 kubelet[2729]: W0909 05:36:27.004133 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.004145 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.004375 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.005307 kubelet[2729]: W0909 05:36:27.004385 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.004397 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.005307 kubelet[2729]: E0909 05:36:27.004652 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.005307 kubelet[2729]: W0909 05:36:27.004662 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.004673 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.004891 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.006078 kubelet[2729]: W0909 05:36:27.004901 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.004911 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.005098 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.006078 kubelet[2729]: W0909 05:36:27.005108 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.005121 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.005316 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.006078 kubelet[2729]: W0909 05:36:27.005325 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006078 kubelet[2729]: E0909 05:36:27.005335 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.006489 kubelet[2729]: E0909 05:36:27.005728 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.006489 kubelet[2729]: W0909 05:36:27.005741 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006489 kubelet[2729]: E0909 05:36:27.005754 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.006489 kubelet[2729]: E0909 05:36:27.006048 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.006489 kubelet[2729]: W0909 05:36:27.006058 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.006489 kubelet[2729]: E0909 05:36:27.006070 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.006586 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.008127 kubelet[2729]: W0909 05:36:27.006599 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.006615 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.006862 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.008127 kubelet[2729]: W0909 05:36:27.006873 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.006908 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.007319 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.008127 kubelet[2729]: W0909 05:36:27.007331 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.007343 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.008127 kubelet[2729]: E0909 05:36:27.007731 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.009304 kubelet[2729]: W0909 05:36:27.007741 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.007752 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.007922 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.009304 kubelet[2729]: W0909 05:36:27.007931 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.007941 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.008106 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.009304 kubelet[2729]: W0909 05:36:27.008115 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.008124 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.009304 kubelet[2729]: E0909 05:36:27.008284 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.009304 kubelet[2729]: W0909 05:36:27.008292 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.010020 kubelet[2729]: E0909 05:36:27.008309 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.035995 kubelet[2729]: E0909 05:36:27.035943 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.035995 kubelet[2729]: W0909 05:36:27.035982 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.036229 kubelet[2729]: E0909 05:36:27.036016 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.036229 kubelet[2729]: I0909 05:36:27.036067 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5cc829cd-94e6-4805-83c2-6c73a3a71220-socket-dir\") pod \"csi-node-driver-9q6kr\" (UID: \"5cc829cd-94e6-4805-83c2-6c73a3a71220\") " pod="calico-system/csi-node-driver-9q6kr"
Sep 9 05:36:27.037908 kubelet[2729]: E0909 05:36:27.037849 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.037908 kubelet[2729]: W0909 05:36:27.037896 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.038329 kubelet[2729]: E0909 05:36:27.037953 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.038626 kubelet[2729]: I0909 05:36:27.038593 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpnwg\" (UniqueName: \"kubernetes.io/projected/5cc829cd-94e6-4805-83c2-6c73a3a71220-kube-api-access-gpnwg\") pod \"csi-node-driver-9q6kr\" (UID: \"5cc829cd-94e6-4805-83c2-6c73a3a71220\") " pod="calico-system/csi-node-driver-9q6kr"
Sep 9 05:36:27.039104 kubelet[2729]: E0909 05:36:27.039071 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.039229 kubelet[2729]: W0909 05:36:27.039123 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.039656 kubelet[2729]: E0909 05:36:27.039617 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.041134 kubelet[2729]: E0909 05:36:27.041102 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.041134 kubelet[2729]: W0909 05:36:27.041130 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.041317 kubelet[2729]: E0909 05:36:27.041190 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 9 05:36:27.041645 kubelet[2729]: I0909 05:36:27.041409 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5cc829cd-94e6-4805-83c2-6c73a3a71220-registration-dir\") pod \"csi-node-driver-9q6kr\" (UID: \"5cc829cd-94e6-4805-83c2-6c73a3a71220\") " pod="calico-system/csi-node-driver-9q6kr"
Sep 9 05:36:27.041793 kubelet[2729]: E0909 05:36:27.041713 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.041793 kubelet[2729]: W0909 05:36:27.041730 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.041793 kubelet[2729]: E0909 05:36:27.041772 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 05:36:27.042783 kubelet[2729]: E0909 05:36:27.042757 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 05:36:27.042783 kubelet[2729]: W0909 05:36:27.042780 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 05:36:27.042912 kubelet[2729]: E0909 05:36:27.042798 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 9 05:36:27.043312 kubelet[2729]: E0909 05:36:27.043279 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.043312 kubelet[2729]: W0909 05:36:27.043304 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.043521 kubelet[2729]: E0909 05:36:27.043326 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.044316 kubelet[2729]: E0909 05:36:27.044282 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.044316 kubelet[2729]: W0909 05:36:27.044306 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.044431 kubelet[2729]: E0909 05:36:27.044324 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.045627 kubelet[2729]: E0909 05:36:27.045540 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.045627 kubelet[2729]: W0909 05:36:27.045616 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.045760 kubelet[2729]: E0909 05:36:27.045635 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.046171 kubelet[2729]: E0909 05:36:27.046112 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.046171 kubelet[2729]: W0909 05:36:27.046158 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.046294 kubelet[2729]: E0909 05:36:27.046175 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.046294 kubelet[2729]: I0909 05:36:27.046240 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5cc829cd-94e6-4805-83c2-6c73a3a71220-varrun\") pod \"csi-node-driver-9q6kr\" (UID: \"5cc829cd-94e6-4805-83c2-6c73a3a71220\") " pod="calico-system/csi-node-driver-9q6kr" Sep 9 05:36:27.046642 kubelet[2729]: E0909 05:36:27.046611 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.046642 kubelet[2729]: W0909 05:36:27.046638 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.047039 kubelet[2729]: E0909 05:36:27.046843 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.047417 kubelet[2729]: E0909 05:36:27.047389 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.047417 kubelet[2729]: W0909 05:36:27.047413 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.047597 kubelet[2729]: E0909 05:36:27.047430 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.047879 kubelet[2729]: I0909 05:36:27.047812 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc829cd-94e6-4805-83c2-6c73a3a71220-kubelet-dir\") pod \"csi-node-driver-9q6kr\" (UID: \"5cc829cd-94e6-4805-83c2-6c73a3a71220\") " pod="calico-system/csi-node-driver-9q6kr" Sep 9 05:36:27.048352 kubelet[2729]: E0909 05:36:27.048308 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.048352 kubelet[2729]: W0909 05:36:27.048332 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.048352 kubelet[2729]: E0909 05:36:27.048349 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.048786 kubelet[2729]: E0909 05:36:27.048763 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.048786 kubelet[2729]: W0909 05:36:27.048785 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.049268 kubelet[2729]: E0909 05:36:27.049234 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.049588 kubelet[2729]: E0909 05:36:27.049567 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.049588 kubelet[2729]: W0909 05:36:27.049585 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.049693 kubelet[2729]: E0909 05:36:27.049600 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.149996 kubelet[2729]: E0909 05:36:27.149956 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.149996 kubelet[2729]: W0909 05:36:27.149985 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.149996 kubelet[2729]: E0909 05:36:27.150009 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.150377 kubelet[2729]: E0909 05:36:27.150356 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.150377 kubelet[2729]: W0909 05:36:27.150371 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.150483 kubelet[2729]: E0909 05:36:27.150426 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.150719 kubelet[2729]: E0909 05:36:27.150699 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.150719 kubelet[2729]: W0909 05:36:27.150712 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.150858 kubelet[2729]: E0909 05:36:27.150732 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.151075 kubelet[2729]: E0909 05:36:27.151042 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.151137 kubelet[2729]: W0909 05:36:27.151074 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.151483 kubelet[2729]: E0909 05:36:27.151461 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.151614 kubelet[2729]: W0909 05:36:27.151595 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.151614 kubelet[2729]: E0909 05:36:27.151613 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.151724 kubelet[2729]: E0909 05:36:27.151567 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.151946 kubelet[2729]: E0909 05:36:27.151925 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.151946 kubelet[2729]: W0909 05:36:27.151939 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.152049 kubelet[2729]: E0909 05:36:27.151957 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.152300 kubelet[2729]: E0909 05:36:27.152277 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.152300 kubelet[2729]: W0909 05:36:27.152299 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.152500 kubelet[2729]: E0909 05:36:27.152468 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.153024 kubelet[2729]: E0909 05:36:27.152999 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.153024 kubelet[2729]: W0909 05:36:27.153015 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.153024 kubelet[2729]: E0909 05:36:27.153028 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.153778 kubelet[2729]: E0909 05:36:27.153754 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.153778 kubelet[2729]: W0909 05:36:27.153781 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.154498 kubelet[2729]: E0909 05:36:27.153862 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.154498 kubelet[2729]: E0909 05:36:27.154453 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.154498 kubelet[2729]: W0909 05:36:27.154467 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.154680 kubelet[2729]: E0909 05:36:27.154649 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.155631 kubelet[2729]: E0909 05:36:27.155601 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.155631 kubelet[2729]: W0909 05:36:27.155626 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.155772 kubelet[2729]: E0909 05:36:27.155721 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.155926 kubelet[2729]: E0909 05:36:27.155905 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.155926 kubelet[2729]: W0909 05:36:27.155924 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.156026 kubelet[2729]: E0909 05:36:27.156012 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.156159 kubelet[2729]: E0909 05:36:27.156142 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.156159 kubelet[2729]: W0909 05:36:27.156154 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.156472 kubelet[2729]: E0909 05:36:27.156261 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.156472 kubelet[2729]: E0909 05:36:27.156304 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.156472 kubelet[2729]: W0909 05:36:27.156310 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.156472 kubelet[2729]: E0909 05:36:27.156423 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.156472 kubelet[2729]: E0909 05:36:27.156464 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.156472 kubelet[2729]: W0909 05:36:27.156469 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.156905 kubelet[2729]: E0909 05:36:27.156486 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.156905 kubelet[2729]: E0909 05:36:27.156715 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.156905 kubelet[2729]: W0909 05:36:27.156724 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.156905 kubelet[2729]: E0909 05:36:27.156743 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.157061 kubelet[2729]: E0909 05:36:27.156990 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.157061 kubelet[2729]: W0909 05:36:27.156998 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.157061 kubelet[2729]: E0909 05:36:27.157017 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.157598 kubelet[2729]: E0909 05:36:27.157241 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.157598 kubelet[2729]: W0909 05:36:27.157256 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.157598 kubelet[2729]: E0909 05:36:27.157278 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.158642 kubelet[2729]: E0909 05:36:27.158616 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.158642 kubelet[2729]: W0909 05:36:27.158634 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.158787 kubelet[2729]: E0909 05:36:27.158716 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.158855 kubelet[2729]: E0909 05:36:27.158839 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.158855 kubelet[2729]: W0909 05:36:27.158850 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.158967 kubelet[2729]: E0909 05:36:27.158952 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.159028 kubelet[2729]: E0909 05:36:27.159005 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.159028 kubelet[2729]: W0909 05:36:27.159011 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.159112 kubelet[2729]: E0909 05:36:27.159083 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.159240 kubelet[2729]: E0909 05:36:27.159223 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.159240 kubelet[2729]: W0909 05:36:27.159235 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.159326 kubelet[2729]: E0909 05:36:27.159253 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.159436 kubelet[2729]: E0909 05:36:27.159421 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.159436 kubelet[2729]: W0909 05:36:27.159432 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.159534 kubelet[2729]: E0909 05:36:27.159440 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.159853 kubelet[2729]: E0909 05:36:27.159834 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.159931 kubelet[2729]: W0909 05:36:27.159858 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.159931 kubelet[2729]: E0909 05:36:27.159870 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.160759 kubelet[2729]: E0909 05:36:27.160738 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.160759 kubelet[2729]: W0909 05:36:27.160760 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.160867 kubelet[2729]: E0909 05:36:27.160772 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.521843 kubelet[2729]: E0909 05:36:27.521779 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.521843 kubelet[2729]: W0909 05:36:27.521816 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.521843 kubelet[2729]: E0909 05:36:27.521850 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.529403 kubelet[2729]: E0909 05:36:27.528927 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.532905 kubelet[2729]: W0909 05:36:27.528955 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.532905 kubelet[2729]: E0909 05:36:27.532823 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.532905 kubelet[2729]: E0909 05:36:27.529845 2729 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:27.533743 kubelet[2729]: E0909 05:36:27.532958 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d5eef90-94b5-4dac-ab40-3bbcb82d282e-tigera-ca-bundle podName:7d5eef90-94b5-4dac-ab40-3bbcb82d282e nodeName:}" failed. No retries permitted until 2025-09-09 05:36:28.032937202 +0000 UTC m=+26.215433907 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/7d5eef90-94b5-4dac-ab40-3bbcb82d282e-tigera-ca-bundle") pod "calico-typha-79b9cbf4f5-j85s2" (UID: "7d5eef90-94b5-4dac-ab40-3bbcb82d282e") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:27.538279 kubelet[2729]: E0909 05:36:27.537916 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.539092 kubelet[2729]: W0909 05:36:27.538622 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.539092 kubelet[2729]: E0909 05:36:27.538920 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.557243 kubelet[2729]: E0909 05:36:27.557128 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.557690 kubelet[2729]: W0909 05:36:27.557158 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.557690 kubelet[2729]: E0909 05:36:27.557305 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.658130 kubelet[2729]: E0909 05:36:27.658090 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.658130 kubelet[2729]: W0909 05:36:27.658118 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.658130 kubelet[2729]: E0909 05:36:27.658143 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.685481 kubelet[2729]: E0909 05:36:27.685442 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.685481 kubelet[2729]: W0909 05:36:27.685468 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.685481 kubelet[2729]: E0909 05:36:27.685491 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.759564 kubelet[2729]: E0909 05:36:27.759357 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.759564 kubelet[2729]: W0909 05:36:27.759392 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.759564 kubelet[2729]: E0909 05:36:27.759422 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:27.857691 containerd[1536]: time="2025-09-09T05:36:27.856900695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7czt5,Uid:8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:27.860938 kubelet[2729]: E0909 05:36:27.860804 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.860938 kubelet[2729]: W0909 05:36:27.860857 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.860938 kubelet[2729]: E0909 05:36:27.860886 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:27.894127 containerd[1536]: time="2025-09-09T05:36:27.892824872Z" level=info msg="connecting to shim b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8" address="unix:///run/containerd/s/bb82ded84243bc103cf9750e653d77156e947658821bec420964caa92d2b4cd4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:27.938943 systemd[1]: Started cri-containerd-b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8.scope - libcontainer container b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8. Sep 9 05:36:27.962322 kubelet[2729]: E0909 05:36:27.962200 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:27.962699 kubelet[2729]: W0909 05:36:27.962443 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:27.963630 kubelet[2729]: E0909 05:36:27.963063 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:28.001310 containerd[1536]: time="2025-09-09T05:36:28.001195216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7czt5,Uid:8864ad20-e1a3-4e6e-bbe4-4e815fbf1c52,Namespace:calico-system,Attempt:0,} returns sandbox id \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\"" Sep 9 05:36:28.005128 containerd[1536]: time="2025-09-09T05:36:28.005082436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 05:36:28.067145 kubelet[2729]: E0909 05:36:28.067003 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.067145 kubelet[2729]: W0909 05:36:28.067075 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.067145 kubelet[2729]: E0909 05:36:28.067101 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:28.068199 kubelet[2729]: E0909 05:36:28.068032 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.068199 kubelet[2729]: W0909 05:36:28.068050 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.068199 kubelet[2729]: E0909 05:36:28.068066 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:28.068462 kubelet[2729]: E0909 05:36:28.068447 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.068573 kubelet[2729]: W0909 05:36:28.068539 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.068722 kubelet[2729]: E0909 05:36:28.068637 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:28.068876 kubelet[2729]: E0909 05:36:28.068866 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.068925 kubelet[2729]: W0909 05:36:28.068916 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.068967 kubelet[2729]: E0909 05:36:28.068959 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:28.069577 kubelet[2729]: E0909 05:36:28.069360 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.069577 kubelet[2729]: W0909 05:36:28.069380 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.069577 kubelet[2729]: E0909 05:36:28.069392 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 05:36:28.070925 kubelet[2729]: E0909 05:36:28.070906 2729 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 05:36:28.071199 kubelet[2729]: W0909 05:36:28.071135 2729 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 05:36:28.071357 kubelet[2729]: E0909 05:36:28.071299 2729 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 05:36:28.169578 kubelet[2729]: E0909 05:36:28.168802 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:28.170760 containerd[1536]: time="2025-09-09T05:36:28.170610265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b9cbf4f5-j85s2,Uid:7d5eef90-94b5-4dac-ab40-3bbcb82d282e,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:28.202364 containerd[1536]: time="2025-09-09T05:36:28.202257433Z" level=info msg="connecting to shim 984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf" address="unix:///run/containerd/s/573ae457f90d7816ca13dbeff823e36cc6445b34a424d0cdd1f909c04b6f2bb5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:28.259841 systemd[1]: Started cri-containerd-984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf.scope - libcontainer container 984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf. 
Sep 9 05:36:28.364841 containerd[1536]: time="2025-09-09T05:36:28.363317383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79b9cbf4f5-j85s2,Uid:7d5eef90-94b5-4dac-ab40-3bbcb82d282e,Namespace:calico-system,Attempt:0,} returns sandbox id \"984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf\"" Sep 9 05:36:28.366428 kubelet[2729]: E0909 05:36:28.366073 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:29.117509 kubelet[2729]: E0909 05:36:29.117386 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:29.278068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700180908.mount: Deactivated successfully. 
Sep 9 05:36:29.459127 containerd[1536]: time="2025-09-09T05:36:29.458903729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.462060 containerd[1536]: time="2025-09-09T05:36:29.461972965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 9 05:36:29.462931 containerd[1536]: time="2025-09-09T05:36:29.462865957Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.466218 containerd[1536]: time="2025-09-09T05:36:29.466020121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:29.468712 containerd[1536]: time="2025-09-09T05:36:29.468648328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.463317049s" Sep 9 05:36:29.468712 containerd[1536]: time="2025-09-09T05:36:29.468714013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 05:36:29.471731 containerd[1536]: time="2025-09-09T05:36:29.470928842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 05:36:29.474305 containerd[1536]: time="2025-09-09T05:36:29.473761349Z" level=info msg="CreateContainer within sandbox 
\"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 05:36:29.486587 containerd[1536]: time="2025-09-09T05:36:29.485830191Z" level=info msg="Container ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:29.492356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269393463.mount: Deactivated successfully. Sep 9 05:36:29.510919 containerd[1536]: time="2025-09-09T05:36:29.510707839Z" level=info msg="CreateContainer within sandbox \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\"" Sep 9 05:36:29.512271 containerd[1536]: time="2025-09-09T05:36:29.512192671Z" level=info msg="StartContainer for \"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\"" Sep 9 05:36:29.517622 containerd[1536]: time="2025-09-09T05:36:29.517518493Z" level=info msg="connecting to shim ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe" address="unix:///run/containerd/s/bb82ded84243bc103cf9750e653d77156e947658821bec420964caa92d2b4cd4" protocol=ttrpc version=3 Sep 9 05:36:29.555293 systemd[1]: Started cri-containerd-ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe.scope - libcontainer container ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe. Sep 9 05:36:29.632649 containerd[1536]: time="2025-09-09T05:36:29.632594238Z" level=info msg="StartContainer for \"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\" returns successfully" Sep 9 05:36:29.664178 systemd[1]: cri-containerd-ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe.scope: Deactivated successfully. 
Sep 9 05:36:29.664533 systemd[1]: cri-containerd-ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe.scope: Consumed 41ms CPU time, 6.1M memory peak, 2.1M written to disk. Sep 9 05:36:29.670477 containerd[1536]: time="2025-09-09T05:36:29.670419242Z" level=info msg="received exit event container_id:\"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\" id:\"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\" pid:3364 exited_at:{seconds:1757396189 nanos:666260812}" Sep 9 05:36:29.670757 containerd[1536]: time="2025-09-09T05:36:29.670605415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\" id:\"ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe\" pid:3364 exited_at:{seconds:1757396189 nanos:666260812}" Sep 9 05:36:30.196197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecdb6376647246263e401e1a796ec893a14ca0a4f163906f558f0b1a90602efe-rootfs.mount: Deactivated successfully. 
Sep 9 05:36:31.118396 kubelet[2729]: E0909 05:36:31.117926 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:32.375620 containerd[1536]: time="2025-09-09T05:36:32.375532132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:32.377453 containerd[1536]: time="2025-09-09T05:36:32.377387833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 9 05:36:32.379376 containerd[1536]: time="2025-09-09T05:36:32.379311522Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:32.380877 containerd[1536]: time="2025-09-09T05:36:32.380811948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:32.383012 containerd[1536]: time="2025-09-09T05:36:32.382855998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.911881219s" Sep 9 05:36:32.383012 containerd[1536]: time="2025-09-09T05:36:32.382903334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 05:36:32.394617 containerd[1536]: time="2025-09-09T05:36:32.392718654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 05:36:32.446873 containerd[1536]: time="2025-09-09T05:36:32.446823776Z" level=info msg="CreateContainer within sandbox \"984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 05:36:32.463306 containerd[1536]: time="2025-09-09T05:36:32.463239452Z" level=info msg="Container 231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:32.478795 containerd[1536]: time="2025-09-09T05:36:32.478727183Z" level=info msg="CreateContainer within sandbox \"984a6bb178fdc72ef82fd12ddda33533f1c48acd1504da87c0ce498acfac75bf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58\"" Sep 9 05:36:32.480304 containerd[1536]: time="2025-09-09T05:36:32.480242670Z" level=info msg="StartContainer for \"231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58\"" Sep 9 05:36:32.483031 containerd[1536]: time="2025-09-09T05:36:32.482974455Z" level=info msg="connecting to shim 231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58" address="unix:///run/containerd/s/573ae457f90d7816ca13dbeff823e36cc6445b34a424d0cdd1f909c04b6f2bb5" protocol=ttrpc version=3 Sep 9 05:36:32.517911 systemd[1]: Started cri-containerd-231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58.scope - libcontainer container 231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58. 
Sep 9 05:36:32.625010 containerd[1536]: time="2025-09-09T05:36:32.624945566Z" level=info msg="StartContainer for \"231c6c8af38059de2a4ba561cca69879a60c370c0553dca8249a7a7175634b58\" returns successfully" Sep 9 05:36:33.117390 kubelet[2729]: E0909 05:36:33.117310 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:33.301578 kubelet[2729]: E0909 05:36:33.300787 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:33.340663 kubelet[2729]: I0909 05:36:33.340466 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79b9cbf4f5-j85s2" podStartSLOduration=3.315650507 podStartE2EDuration="7.340443196s" podCreationTimestamp="2025-09-09 05:36:26 +0000 UTC" firstStartedPulling="2025-09-09 05:36:28.367347562 +0000 UTC m=+26.549844272" lastFinishedPulling="2025-09-09 05:36:32.392140259 +0000 UTC m=+30.574636961" observedRunningTime="2025-09-09 05:36:33.323303653 +0000 UTC m=+31.505800371" watchObservedRunningTime="2025-09-09 05:36:33.340443196 +0000 UTC m=+31.522939916" Sep 9 05:36:34.303182 kubelet[2729]: E0909 05:36:34.303141 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:35.117902 kubelet[2729]: E0909 05:36:35.117846 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:35.306661 kubelet[2729]: E0909 05:36:35.306613 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:37.011347 containerd[1536]: time="2025-09-09T05:36:37.010588015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.012574 containerd[1536]: time="2025-09-09T05:36:37.012509467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 05:36:37.013955 containerd[1536]: time="2025-09-09T05:36:37.013898164Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.018998 containerd[1536]: time="2025-09-09T05:36:37.018901134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:37.021208 containerd[1536]: time="2025-09-09T05:36:37.021037832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.628260788s" Sep 9 05:36:37.021446 containerd[1536]: time="2025-09-09T05:36:37.021416240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 05:36:37.029413 containerd[1536]: 
time="2025-09-09T05:36:37.028450165Z" level=info msg="CreateContainer within sandbox \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 05:36:37.046646 containerd[1536]: time="2025-09-09T05:36:37.045813929Z" level=info msg="Container 297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:37.058016 containerd[1536]: time="2025-09-09T05:36:37.057948708Z" level=info msg="CreateContainer within sandbox \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\"" Sep 9 05:36:37.059769 containerd[1536]: time="2025-09-09T05:36:37.059719572Z" level=info msg="StartContainer for \"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\"" Sep 9 05:36:37.062210 containerd[1536]: time="2025-09-09T05:36:37.062143044Z" level=info msg="connecting to shim 297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b" address="unix:///run/containerd/s/bb82ded84243bc103cf9750e653d77156e947658821bec420964caa92d2b4cd4" protocol=ttrpc version=3 Sep 9 05:36:37.111842 systemd[1]: Started cri-containerd-297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b.scope - libcontainer container 297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b. 
Sep 9 05:36:37.117108 kubelet[2729]: E0909 05:36:37.117021 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:37.179993 containerd[1536]: time="2025-09-09T05:36:37.179941621Z" level=info msg="StartContainer for \"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\" returns successfully" Sep 9 05:36:38.079336 systemd[1]: cri-containerd-297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b.scope: Deactivated successfully. Sep 9 05:36:38.079677 systemd[1]: cri-containerd-297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b.scope: Consumed 690ms CPU time, 171.6M memory peak, 14.5M read from disk, 171.3M written to disk. Sep 9 05:36:38.156748 containerd[1536]: time="2025-09-09T05:36:38.156687646Z" level=info msg="received exit event container_id:\"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\" id:\"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\" pid:3473 exited_at:{seconds:1757396198 nanos:155981438}" Sep 9 05:36:38.159594 containerd[1536]: time="2025-09-09T05:36:38.159053435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\" id:\"297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b\" pid:3473 exited_at:{seconds:1757396198 nanos:155981438}" Sep 9 05:36:38.198852 kubelet[2729]: I0909 05:36:38.198674 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 05:36:38.244427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-297b454a17045ff58e2a9e60c3ee09d65f6f8fe0228c729d53ac72e15c1db69b-rootfs.mount: Deactivated successfully. 
Sep 9 05:36:38.298959 kubelet[2729]: W0909 05:36:38.298749 2729 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4452.0.0-n-41a4a07365" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object Sep 9 05:36:38.301842 kubelet[2729]: E0909 05:36:38.301685 2729 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" logger="UnhandledError" Sep 9 05:36:38.307508 kubelet[2729]: I0909 05:36:38.304128 2729 status_manager.go:890] "Failed to get status for pod" podUID="fac6e703-38ee-41ec-b92c-fe35196c41bc" pod="kube-system/coredns-668d6bf9bc-6p8l6" err="pods \"coredns-668d6bf9bc-6p8l6\" is forbidden: User \"system:node:ci-4452.0.0-n-41a4a07365\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4452.0.0-n-41a4a07365' and this object" Sep 9 05:36:38.327510 systemd[1]: Created slice kubepods-burstable-podfac6e703_38ee_41ec_b92c_fe35196c41bc.slice - libcontainer container kubepods-burstable-podfac6e703_38ee_41ec_b92c_fe35196c41bc.slice. Sep 9 05:36:38.353362 systemd[1]: Created slice kubepods-besteffort-pod4d2c5372_f70a_4bb7_a7eb_f4b8172296a7.slice - libcontainer container kubepods-besteffort-pod4d2c5372_f70a_4bb7_a7eb_f4b8172296a7.slice. Sep 9 05:36:38.374802 systemd[1]: Created slice kubepods-burstable-pod73ac7708_2947_4b1b_befb_e7b4c7e7afc5.slice - libcontainer container kubepods-burstable-pod73ac7708_2947_4b1b_befb_e7b4c7e7afc5.slice. 
Sep 9 05:36:38.396886 systemd[1]: Created slice kubepods-besteffort-podf35fef50_a2f4_447d_b45d_e83823339dd4.slice - libcontainer container kubepods-besteffort-podf35fef50_a2f4_447d_b45d_e83823339dd4.slice. Sep 9 05:36:38.404969 kubelet[2729]: I0909 05:36:38.404682 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fac6e703-38ee-41ec-b92c-fe35196c41bc-config-volume\") pod \"coredns-668d6bf9bc-6p8l6\" (UID: \"fac6e703-38ee-41ec-b92c-fe35196c41bc\") " pod="kube-system/coredns-668d6bf9bc-6p8l6" Sep 9 05:36:38.404969 kubelet[2729]: I0909 05:36:38.404769 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shft2\" (UniqueName: \"kubernetes.io/projected/fac6e703-38ee-41ec-b92c-fe35196c41bc-kube-api-access-shft2\") pod \"coredns-668d6bf9bc-6p8l6\" (UID: \"fac6e703-38ee-41ec-b92c-fe35196c41bc\") " pod="kube-system/coredns-668d6bf9bc-6p8l6" Sep 9 05:36:38.412058 containerd[1536]: time="2025-09-09T05:36:38.411840396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 05:36:38.419950 systemd[1]: Created slice kubepods-besteffort-podf446029d_ede7_451f_86c1_ecf8a42526d0.slice - libcontainer container kubepods-besteffort-podf446029d_ede7_451f_86c1_ecf8a42526d0.slice. Sep 9 05:36:38.458173 systemd[1]: Created slice kubepods-besteffort-podbb5c32bf_ef45_46d8_b3c3_043c0a4c49f4.slice - libcontainer container kubepods-besteffort-podbb5c32bf_ef45_46d8_b3c3_043c0a4c49f4.slice. Sep 9 05:36:38.471751 systemd[1]: Created slice kubepods-besteffort-pod679a70ba_6cff_4a0a_9a8f_12021cd530ce.slice - libcontainer container kubepods-besteffort-pod679a70ba_6cff_4a0a_9a8f_12021cd530ce.slice. 
Sep 9 05:36:38.505583 kubelet[2729]: I0909 05:36:38.505491 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-backend-key-pair\") pod \"whisker-fc946f9d8-ws2xc\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " pod="calico-system/whisker-fc946f9d8-ws2xc" Sep 9 05:36:38.505583 kubelet[2729]: I0909 05:36:38.505576 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f35fef50-a2f4-447d-b45d-e83823339dd4-calico-apiserver-certs\") pod \"calico-apiserver-79f7b7478c-6n9zh\" (UID: \"f35fef50-a2f4-447d-b45d-e83823339dd4\") " pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" Sep 9 05:36:38.505854 kubelet[2729]: I0909 05:36:38.505668 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5ggf\" (UniqueName: \"kubernetes.io/projected/bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4-kube-api-access-s5ggf\") pod \"calico-kube-controllers-5cbc46b88c-545l7\" (UID: \"bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4\") " pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" Sep 9 05:36:38.505854 kubelet[2729]: I0909 05:36:38.505708 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/679a70ba-6cff-4a0a-9a8f-12021cd530ce-calico-apiserver-certs\") pod \"calico-apiserver-79f7b7478c-hnm6t\" (UID: \"679a70ba-6cff-4a0a-9a8f-12021cd530ce\") " pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" Sep 9 05:36:38.505854 kubelet[2729]: I0909 05:36:38.505738 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgw7x\" (UniqueName: 
\"kubernetes.io/projected/679a70ba-6cff-4a0a-9a8f-12021cd530ce-kube-api-access-xgw7x\") pod \"calico-apiserver-79f7b7478c-hnm6t\" (UID: \"679a70ba-6cff-4a0a-9a8f-12021cd530ce\") " pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" Sep 9 05:36:38.505854 kubelet[2729]: I0909 05:36:38.505766 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73ac7708-2947-4b1b-befb-e7b4c7e7afc5-config-volume\") pod \"coredns-668d6bf9bc-t7jv2\" (UID: \"73ac7708-2947-4b1b-befb-e7b4c7e7afc5\") " pod="kube-system/coredns-668d6bf9bc-t7jv2" Sep 9 05:36:38.505854 kubelet[2729]: I0909 05:36:38.505787 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tjhf\" (UniqueName: \"kubernetes.io/projected/4d2c5372-f70a-4bb7-a7eb-f4b8172296a7-kube-api-access-9tjhf\") pod \"goldmane-54d579b49d-6svxj\" (UID: \"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7\") " pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:38.506081 kubelet[2729]: I0909 05:36:38.505841 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d2c5372-f70a-4bb7-a7eb-f4b8172296a7-config\") pod \"goldmane-54d579b49d-6svxj\" (UID: \"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7\") " pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:38.506081 kubelet[2729]: I0909 05:36:38.505866 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d2c5372-f70a-4bb7-a7eb-f4b8172296a7-goldmane-key-pair\") pod \"goldmane-54d579b49d-6svxj\" (UID: \"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7\") " pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:38.506081 kubelet[2729]: I0909 05:36:38.505888 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kbwm2\" (UniqueName: \"kubernetes.io/projected/f446029d-ede7-451f-86c1-ecf8a42526d0-kube-api-access-kbwm2\") pod \"whisker-fc946f9d8-ws2xc\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " pod="calico-system/whisker-fc946f9d8-ws2xc" Sep 9 05:36:38.506081 kubelet[2729]: I0909 05:36:38.505937 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-ca-bundle\") pod \"whisker-fc946f9d8-ws2xc\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " pod="calico-system/whisker-fc946f9d8-ws2xc" Sep 9 05:36:38.506081 kubelet[2729]: I0909 05:36:38.506008 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d2c5372-f70a-4bb7-a7eb-f4b8172296a7-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-6svxj\" (UID: \"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7\") " pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:38.506306 kubelet[2729]: I0909 05:36:38.506026 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cb6g\" (UniqueName: \"kubernetes.io/projected/73ac7708-2947-4b1b-befb-e7b4c7e7afc5-kube-api-access-8cb6g\") pod \"coredns-668d6bf9bc-t7jv2\" (UID: \"73ac7708-2947-4b1b-befb-e7b4c7e7afc5\") " pod="kube-system/coredns-668d6bf9bc-t7jv2" Sep 9 05:36:38.506306 kubelet[2729]: I0909 05:36:38.506089 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljgc\" (UniqueName: \"kubernetes.io/projected/f35fef50-a2f4-447d-b45d-e83823339dd4-kube-api-access-qljgc\") pod \"calico-apiserver-79f7b7478c-6n9zh\" (UID: \"f35fef50-a2f4-447d-b45d-e83823339dd4\") " pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" Sep 9 05:36:38.506306 kubelet[2729]: I0909 05:36:38.506108 2729 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4-tigera-ca-bundle\") pod \"calico-kube-controllers-5cbc46b88c-545l7\" (UID: \"bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4\") " pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" Sep 9 05:36:38.754591 containerd[1536]: time="2025-09-09T05:36:38.754442967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc946f9d8-ws2xc,Uid:f446029d-ede7-451f-86c1-ecf8a42526d0,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:38.772270 containerd[1536]: time="2025-09-09T05:36:38.771855273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc46b88c-545l7,Uid:bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:38.787685 containerd[1536]: time="2025-09-09T05:36:38.787608191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-hnm6t,Uid:679a70ba-6cff-4a0a-9a8f-12021cd530ce,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:38.968918 containerd[1536]: time="2025-09-09T05:36:38.968784368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6svxj,Uid:4d2c5372-f70a-4bb7-a7eb-f4b8172296a7,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:39.030713 containerd[1536]: time="2025-09-09T05:36:39.030167455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-6n9zh,Uid:f35fef50-a2f4-447d-b45d-e83823339dd4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:39.098812 containerd[1536]: time="2025-09-09T05:36:39.098753431Z" level=error msg="Failed to destroy network for sandbox \"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 9 05:36:39.099312 containerd[1536]: time="2025-09-09T05:36:39.099279295Z" level=error msg="Failed to destroy network for sandbox \"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.100652 containerd[1536]: time="2025-09-09T05:36:39.100586856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-hnm6t,Uid:679a70ba-6cff-4a0a-9a8f-12021cd530ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.103723 kubelet[2729]: E0909 05:36:39.102719 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.103723 kubelet[2729]: E0909 05:36:39.102827 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" Sep 9 05:36:39.103723 kubelet[2729]: E0909 
05:36:39.102852 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" Sep 9 05:36:39.104008 kubelet[2729]: E0909 05:36:39.102918 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f7b7478c-hnm6t_calico-apiserver(679a70ba-6cff-4a0a-9a8f-12021cd530ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f7b7478c-hnm6t_calico-apiserver(679a70ba-6cff-4a0a-9a8f-12021cd530ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebc3633a7a5467ef1c86ada294b86b48af4e9bd8c288e6f6682a74425a7c44b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" podUID="679a70ba-6cff-4a0a-9a8f-12021cd530ce" Sep 9 05:36:39.106215 containerd[1536]: time="2025-09-09T05:36:39.106020363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fc946f9d8-ws2xc,Uid:f446029d-ede7-451f-86c1-ecf8a42526d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.124760 kubelet[2729]: E0909 05:36:39.124461 2729 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.124760 kubelet[2729]: E0909 05:36:39.124711 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc946f9d8-ws2xc" Sep 9 05:36:39.125997 kubelet[2729]: E0909 05:36:39.125589 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-fc946f9d8-ws2xc" Sep 9 05:36:39.125997 kubelet[2729]: E0909 05:36:39.125720 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-fc946f9d8-ws2xc_calico-system(f446029d-ede7-451f-86c1-ecf8a42526d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-fc946f9d8-ws2xc_calico-system(f446029d-ede7-451f-86c1-ecf8a42526d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55347f3f0932aa2866951cea6cc2ef51f0dfd993c1d302a216814463d6234a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-fc946f9d8-ws2xc" podUID="f446029d-ede7-451f-86c1-ecf8a42526d0" Sep 9 05:36:39.135043 systemd[1]: Created slice kubepods-besteffort-pod5cc829cd_94e6_4805_83c2_6c73a3a71220.slice - libcontainer container kubepods-besteffort-pod5cc829cd_94e6_4805_83c2_6c73a3a71220.slice. Sep 9 05:36:39.145601 containerd[1536]: time="2025-09-09T05:36:39.144785361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q6kr,Uid:5cc829cd-94e6-4805-83c2-6c73a3a71220,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:39.158791 containerd[1536]: time="2025-09-09T05:36:39.158728409Z" level=error msg="Failed to destroy network for sandbox \"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.162222 containerd[1536]: time="2025-09-09T05:36:39.162103083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc46b88c-545l7,Uid:bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.163983 kubelet[2729]: E0909 05:36:39.163760 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.164739 
kubelet[2729]: E0909 05:36:39.164247 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" Sep 9 05:36:39.164739 kubelet[2729]: E0909 05:36:39.164596 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" Sep 9 05:36:39.164739 kubelet[2729]: E0909 05:36:39.164680 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbc46b88c-545l7_calico-system(bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbc46b88c-545l7_calico-system(bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5d21f4b03c89ecf59c093c447372d26a5299619e0a05384d087ccd1ed1e8566\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" podUID="bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4" Sep 9 05:36:39.290658 containerd[1536]: time="2025-09-09T05:36:39.290495784Z" level=error msg="Failed to destroy network for sandbox 
\"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.296209 systemd[1]: run-netns-cni\x2dbad0e1b4\x2d4f85\x2d4af7\x2d3120\x2df53fa9da84e8.mount: Deactivated successfully. Sep 9 05:36:39.297474 containerd[1536]: time="2025-09-09T05:36:39.296250300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-6n9zh,Uid:f35fef50-a2f4-447d-b45d-e83823339dd4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.297839 kubelet[2729]: E0909 05:36:39.296952 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.297839 kubelet[2729]: E0909 05:36:39.297764 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" Sep 9 05:36:39.297839 kubelet[2729]: E0909 05:36:39.297809 2729 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" Sep 9 05:36:39.298763 kubelet[2729]: E0909 05:36:39.297895 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79f7b7478c-6n9zh_calico-apiserver(f35fef50-a2f4-447d-b45d-e83823339dd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79f7b7478c-6n9zh_calico-apiserver(f35fef50-a2f4-447d-b45d-e83823339dd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9cf9a6da449343a3046d9621484d4caa821a8b20183e3dc2352788430e9ce2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" podUID="f35fef50-a2f4-447d-b45d-e83823339dd4" Sep 9 05:36:39.312162 containerd[1536]: time="2025-09-09T05:36:39.312089000Z" level=error msg="Failed to destroy network for sandbox \"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.318751 systemd[1]: run-netns-cni\x2df2b89410\x2dc806\x2d10cc\x2d16e0\x2dc2f70bbd3411.mount: Deactivated successfully. 
Sep 9 05:36:39.320088 containerd[1536]: time="2025-09-09T05:36:39.316837416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6svxj,Uid:4d2c5372-f70a-4bb7-a7eb-f4b8172296a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.321459 kubelet[2729]: E0909 05:36:39.321406 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.321905 kubelet[2729]: E0909 05:36:39.321867 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:39.323366 kubelet[2729]: E0909 05:36:39.322031 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-54d579b49d-6svxj" Sep 9 05:36:39.323366 kubelet[2729]: E0909 05:36:39.322108 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-6svxj_calico-system(4d2c5372-f70a-4bb7-a7eb-f4b8172296a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-6svxj_calico-system(4d2c5372-f70a-4bb7-a7eb-f4b8172296a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ba27fe182df91632a907b379ccc50f6bbca527946f53ffb374309b3bd998cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-6svxj" podUID="4d2c5372-f70a-4bb7-a7eb-f4b8172296a7" Sep 9 05:36:39.346309 containerd[1536]: time="2025-09-09T05:36:39.346219466Z" level=error msg="Failed to destroy network for sandbox \"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.349412 containerd[1536]: time="2025-09-09T05:36:39.349341044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q6kr,Uid:5cc829cd-94e6-4805-83c2-6c73a3a71220,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.350355 systemd[1]: run-netns-cni\x2d0301c62f\x2d3d36\x2ddd55\x2dd467\x2d2ea277c99702.mount: Deactivated successfully. 
Sep 9 05:36:39.351826 kubelet[2729]: E0909 05:36:39.351494 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:39.352839 kubelet[2729]: E0909 05:36:39.352628 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9q6kr" Sep 9 05:36:39.353328 kubelet[2729]: E0909 05:36:39.353274 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9q6kr" Sep 9 05:36:39.354294 kubelet[2729]: E0909 05:36:39.353382 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9q6kr_calico-system(5cc829cd-94e6-4805-83c2-6c73a3a71220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9q6kr_calico-system(5cc829cd-94e6-4805-83c2-6c73a3a71220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ce88e344d8a82cf4885fe1765effafb1f274feee711ed57d0f8f2030002cde1\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9q6kr" podUID="5cc829cd-94e6-4805-83c2-6c73a3a71220" Sep 9 05:36:39.512948 kubelet[2729]: E0909 05:36:39.512874 2729 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:39.513262 kubelet[2729]: E0909 05:36:39.513021 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fac6e703-38ee-41ec-b92c-fe35196c41bc-config-volume podName:fac6e703-38ee-41ec-b92c-fe35196c41bc nodeName:}" failed. No retries permitted until 2025-09-09 05:36:40.012993576 +0000 UTC m=+38.195490270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fac6e703-38ee-41ec-b92c-fe35196c41bc-config-volume") pod "coredns-668d6bf9bc-6p8l6" (UID: "fac6e703-38ee-41ec-b92c-fe35196c41bc") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:39.613049 kubelet[2729]: E0909 05:36:39.612771 2729 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:39.613049 kubelet[2729]: E0909 05:36:39.612892 2729 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73ac7708-2947-4b1b-befb-e7b4c7e7afc5-config-volume podName:73ac7708-2947-4b1b-befb-e7b4c7e7afc5 nodeName:}" failed. No retries permitted until 2025-09-09 05:36:40.112869529 +0000 UTC m=+38.295366220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73ac7708-2947-4b1b-befb-e7b4c7e7afc5-config-volume") pod "coredns-668d6bf9bc-t7jv2" (UID: "73ac7708-2947-4b1b-befb-e7b4c7e7afc5") : failed to sync configmap cache: timed out waiting for the condition Sep 9 05:36:40.146245 kubelet[2729]: E0909 05:36:40.146204 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:40.148082 containerd[1536]: time="2025-09-09T05:36:40.148015206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p8l6,Uid:fac6e703-38ee-41ec-b92c-fe35196c41bc,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:40.187923 kubelet[2729]: E0909 05:36:40.186950 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:40.193411 containerd[1536]: time="2025-09-09T05:36:40.190778341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t7jv2,Uid:73ac7708-2947-4b1b-befb-e7b4c7e7afc5,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:40.309515 containerd[1536]: time="2025-09-09T05:36:40.309446417Z" level=error msg="Failed to destroy network for sandbox \"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.312736 containerd[1536]: time="2025-09-09T05:36:40.312628931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p8l6,Uid:fac6e703-38ee-41ec-b92c-fe35196c41bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.316146 kubelet[2729]: E0909 05:36:40.313924 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.316146 kubelet[2729]: E0909 05:36:40.313988 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6p8l6" Sep 9 05:36:40.316146 kubelet[2729]: E0909 05:36:40.314013 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6p8l6" Sep 9 05:36:40.315356 systemd[1]: run-netns-cni\x2dbedc851a\x2dfb36\x2d6b9b\x2db76d\x2d2724ac235d8a.mount: Deactivated successfully. 
Sep 9 05:36:40.318151 kubelet[2729]: E0909 05:36:40.314064 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6p8l6_kube-system(fac6e703-38ee-41ec-b92c-fe35196c41bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6p8l6_kube-system(fac6e703-38ee-41ec-b92c-fe35196c41bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a11929844a25aab959b9c653a1a6c1d91068755c018e340a7d2bd379f4744313\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6p8l6" podUID="fac6e703-38ee-41ec-b92c-fe35196c41bc" Sep 9 05:36:40.344193 containerd[1536]: time="2025-09-09T05:36:40.344116321Z" level=error msg="Failed to destroy network for sandbox \"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.348147 systemd[1]: run-netns-cni\x2d67c319e1\x2db7fc\x2d9e53\x2d8fab\x2de8f9a1d77c7d.mount: Deactivated successfully. 
Sep 9 05:36:40.350589 containerd[1536]: time="2025-09-09T05:36:40.349811271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t7jv2,Uid:73ac7708-2947-4b1b-befb-e7b4c7e7afc5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.350840 kubelet[2729]: E0909 05:36:40.350757 2729 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 05:36:40.350840 kubelet[2729]: E0909 05:36:40.350827 2729 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t7jv2" Sep 9 05:36:40.350988 kubelet[2729]: E0909 05:36:40.350866 2729 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t7jv2" 
Sep 9 05:36:40.350988 kubelet[2729]: E0909 05:36:40.350910 2729 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t7jv2_kube-system(73ac7708-2947-4b1b-befb-e7b4c7e7afc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t7jv2_kube-system(73ac7708-2947-4b1b-befb-e7b4c7e7afc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44191e2cb25f04ebd80417db579151ffe591b41fe80f82a3e6e21f4d8cb66e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t7jv2" podUID="73ac7708-2947-4b1b-befb-e7b4c7e7afc5" Sep 9 05:36:45.665026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545603237.mount: Deactivated successfully. Sep 9 05:36:45.870131 containerd[1536]: time="2025-09-09T05:36:45.868275432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 05:36:45.974087 containerd[1536]: time="2025-09-09T05:36:45.973893410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.037427 containerd[1536]: time="2025-09-09T05:36:46.037352496Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.061274 containerd[1536]: time="2025-09-09T05:36:46.061188188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:46.062932 containerd[1536]: time="2025-09-09T05:36:46.062770964Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.650866862s" Sep 9 05:36:46.062932 containerd[1536]: time="2025-09-09T05:36:46.062842182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 05:36:46.116012 containerd[1536]: time="2025-09-09T05:36:46.115942126Z" level=info msg="CreateContainer within sandbox \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 05:36:46.179240 containerd[1536]: time="2025-09-09T05:36:46.178788468Z" level=info msg="Container ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:46.179613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865984102.mount: Deactivated successfully. 
Sep 9 05:36:46.225588 containerd[1536]: time="2025-09-09T05:36:46.224278821Z" level=info msg="CreateContainer within sandbox \"b527a7a3f166969f7d355088abdfdfdd0ea603b7a7fc07c8a4994ee7ec9bf1a8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\"" Sep 9 05:36:46.228899 containerd[1536]: time="2025-09-09T05:36:46.228693002Z" level=info msg="StartContainer for \"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\"" Sep 9 05:36:46.240665 containerd[1536]: time="2025-09-09T05:36:46.240607142Z" level=info msg="connecting to shim ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82" address="unix:///run/containerd/s/bb82ded84243bc103cf9750e653d77156e947658821bec420964caa92d2b4cd4" protocol=ttrpc version=3 Sep 9 05:36:46.430185 systemd[1]: Started cri-containerd-ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82.scope - libcontainer container ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82. Sep 9 05:36:46.537596 containerd[1536]: time="2025-09-09T05:36:46.534145877Z" level=info msg="StartContainer for \"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\" returns successfully" Sep 9 05:36:46.688212 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 05:36:46.689230 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 05:36:46.987623 kubelet[2729]: I0909 05:36:46.987449 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbwm2\" (UniqueName: \"kubernetes.io/projected/f446029d-ede7-451f-86c1-ecf8a42526d0-kube-api-access-kbwm2\") pod \"f446029d-ede7-451f-86c1-ecf8a42526d0\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " Sep 9 05:36:46.987623 kubelet[2729]: I0909 05:36:46.987498 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-ca-bundle\") pod \"f446029d-ede7-451f-86c1-ecf8a42526d0\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " Sep 9 05:36:46.987623 kubelet[2729]: I0909 05:36:46.987518 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-backend-key-pair\") pod \"f446029d-ede7-451f-86c1-ecf8a42526d0\" (UID: \"f446029d-ede7-451f-86c1-ecf8a42526d0\") " Sep 9 05:36:46.993910 kubelet[2729]: I0909 05:36:46.993838 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f446029d-ede7-451f-86c1-ecf8a42526d0" (UID: "f446029d-ede7-451f-86c1-ecf8a42526d0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 05:36:47.005870 systemd[1]: var-lib-kubelet-pods-f446029d\x2dede7\x2d451f\x2d86c1\x2decf8a42526d0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 05:36:47.015182 kubelet[2729]: I0909 05:36:47.014656 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f446029d-ede7-451f-86c1-ecf8a42526d0" (UID: "f446029d-ede7-451f-86c1-ecf8a42526d0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 05:36:47.015821 kubelet[2729]: I0909 05:36:47.015754 2729 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f446029d-ede7-451f-86c1-ecf8a42526d0-kube-api-access-kbwm2" (OuterVolumeSpecName: "kube-api-access-kbwm2") pod "f446029d-ede7-451f-86c1-ecf8a42526d0" (UID: "f446029d-ede7-451f-86c1-ecf8a42526d0"). InnerVolumeSpecName "kube-api-access-kbwm2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 05:36:47.015822 systemd[1]: var-lib-kubelet-pods-f446029d\x2dede7\x2d451f\x2d86c1\x2decf8a42526d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkbwm2.mount: Deactivated successfully. 
Sep 9 05:36:47.088319 kubelet[2729]: I0909 05:36:47.088261 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-backend-key-pair\") on node \"ci-4452.0.0-n-41a4a07365\" DevicePath \"\"" Sep 9 05:36:47.088319 kubelet[2729]: I0909 05:36:47.088313 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kbwm2\" (UniqueName: \"kubernetes.io/projected/f446029d-ede7-451f-86c1-ecf8a42526d0-kube-api-access-kbwm2\") on node \"ci-4452.0.0-n-41a4a07365\" DevicePath \"\"" Sep 9 05:36:47.090099 kubelet[2729]: I0909 05:36:47.088361 2729 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f446029d-ede7-451f-86c1-ecf8a42526d0-whisker-ca-bundle\") on node \"ci-4452.0.0-n-41a4a07365\" DevicePath \"\"" Sep 9 05:36:47.483794 systemd[1]: Removed slice kubepods-besteffort-podf446029d_ede7_451f_86c1_ecf8a42526d0.slice - libcontainer container kubepods-besteffort-podf446029d_ede7_451f_86c1_ecf8a42526d0.slice. Sep 9 05:36:47.513457 kubelet[2729]: I0909 05:36:47.512695 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7czt5" podStartSLOduration=3.424502097 podStartE2EDuration="21.512662397s" podCreationTimestamp="2025-09-09 05:36:26 +0000 UTC" firstStartedPulling="2025-09-09 05:36:28.003317718 +0000 UTC m=+26.185814436" lastFinishedPulling="2025-09-09 05:36:46.09147804 +0000 UTC m=+44.273974736" observedRunningTime="2025-09-09 05:36:47.508093247 +0000 UTC m=+45.690589967" watchObservedRunningTime="2025-09-09 05:36:47.512662397 +0000 UTC m=+45.695159122" Sep 9 05:36:47.658256 systemd[1]: Created slice kubepods-besteffort-podb6c8920a_1a2d_4f75_ae16_4a56b5ed558a.slice - libcontainer container kubepods-besteffort-podb6c8920a_1a2d_4f75_ae16_4a56b5ed558a.slice. 
Sep 9 05:36:47.702490 kubelet[2729]: I0909 05:36:47.702321 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrpr\" (UniqueName: \"kubernetes.io/projected/b6c8920a-1a2d-4f75-ae16-4a56b5ed558a-kube-api-access-fkrpr\") pod \"whisker-7ff649d8c5-kdrgr\" (UID: \"b6c8920a-1a2d-4f75-ae16-4a56b5ed558a\") " pod="calico-system/whisker-7ff649d8c5-kdrgr" Sep 9 05:36:47.702852 kubelet[2729]: I0909 05:36:47.702681 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c8920a-1a2d-4f75-ae16-4a56b5ed558a-whisker-ca-bundle\") pod \"whisker-7ff649d8c5-kdrgr\" (UID: \"b6c8920a-1a2d-4f75-ae16-4a56b5ed558a\") " pod="calico-system/whisker-7ff649d8c5-kdrgr" Sep 9 05:36:47.702852 kubelet[2729]: I0909 05:36:47.702831 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b6c8920a-1a2d-4f75-ae16-4a56b5ed558a-whisker-backend-key-pair\") pod \"whisker-7ff649d8c5-kdrgr\" (UID: \"b6c8920a-1a2d-4f75-ae16-4a56b5ed558a\") " pod="calico-system/whisker-7ff649d8c5-kdrgr" Sep 9 05:36:47.792623 containerd[1536]: time="2025-09-09T05:36:47.792516512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\" id:\"83f9ee48ff929d86ed51875d9c44ea32b765d7147a9d63ba89802dfea13b1755\" pid:3811 exit_status:1 exited_at:{seconds:1757396207 nanos:791994242}" Sep 9 05:36:47.971586 containerd[1536]: time="2025-09-09T05:36:47.971484717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff649d8c5-kdrgr,Uid:b6c8920a-1a2d-4f75-ae16-4a56b5ed558a,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:48.122620 kubelet[2729]: I0909 05:36:48.122088 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f446029d-ede7-451f-86c1-ecf8a42526d0" path="/var/lib/kubelet/pods/f446029d-ede7-451f-86c1-ecf8a42526d0/volumes" Sep 9 05:36:48.402393 systemd-networkd[1447]: cali36985fa2ad0: Link UP Sep 9 05:36:48.403740 systemd-networkd[1447]: cali36985fa2ad0: Gained carrier Sep 9 05:36:48.457279 containerd[1536]: 2025-09-09 05:36:48.019 [INFO][3825] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 05:36:48.457279 containerd[1536]: 2025-09-09 05:36:48.098 [INFO][3825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0 whisker-7ff649d8c5- calico-system b6c8920a-1a2d-4f75-ae16-4a56b5ed558a 908 0 2025-09-09 05:36:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7ff649d8c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 whisker-7ff649d8c5-kdrgr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali36985fa2ad0 [] [] }} ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-" Sep 9 05:36:48.457279 containerd[1536]: 2025-09-09 05:36:48.099 [INFO][3825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.457279 containerd[1536]: 2025-09-09 05:36:48.292 [INFO][3837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" 
HandleID="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Workload="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.293 [INFO][3837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" HandleID="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Workload="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"whisker-7ff649d8c5-kdrgr", "timestamp":"2025-09-09 05:36:48.292308652 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.293 [INFO][3837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.294 [INFO][3837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.294 [INFO][3837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.308 [INFO][3837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.326 [INFO][3837] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.336 [INFO][3837] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.341 [INFO][3837] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.460781 containerd[1536]: 2025-09-09 05:36:48.347 [INFO][3837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.348 [INFO][3837] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.351 [INFO][3837] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250 Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.360 [INFO][3837] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.373 [INFO][3837] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.193/26] block=192.168.66.192/26 handle="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.373 [INFO][3837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.193/26] handle="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.373 [INFO][3837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:48.461157 containerd[1536]: 2025-09-09 05:36:48.373 [INFO][3837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.193/26] IPv6=[] ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" HandleID="k8s-pod-network.f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Workload="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.461314 containerd[1536]: 2025-09-09 05:36:48.382 [INFO][3825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0", GenerateName:"whisker-7ff649d8c5-", Namespace:"calico-system", SelfLink:"", UID:"b6c8920a-1a2d-4f75-ae16-4a56b5ed558a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff649d8c5", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"whisker-7ff649d8c5-kdrgr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali36985fa2ad0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:48.461314 containerd[1536]: 2025-09-09 05:36:48.383 [INFO][3825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.193/32] ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.461407 containerd[1536]: 2025-09-09 05:36:48.383 [INFO][3825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36985fa2ad0 ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.461407 containerd[1536]: 2025-09-09 05:36:48.405 [INFO][3825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.461461 containerd[1536]: 2025-09-09 05:36:48.407 [INFO][3825] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0", GenerateName:"whisker-7ff649d8c5-", Namespace:"calico-system", SelfLink:"", UID:"b6c8920a-1a2d-4f75-ae16-4a56b5ed558a", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff649d8c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250", Pod:"whisker-7ff649d8c5-kdrgr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.66.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali36985fa2ad0", MAC:"7e:d1:6c:89:01:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:48.461524 containerd[1536]: 2025-09-09 05:36:48.446 [INFO][3825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" 
Namespace="calico-system" Pod="whisker-7ff649d8c5-kdrgr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-whisker--7ff649d8c5--kdrgr-eth0" Sep 9 05:36:48.706920 containerd[1536]: time="2025-09-09T05:36:48.705817870Z" level=info msg="connecting to shim f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250" address="unix:///run/containerd/s/03e05dad8026631e968521da846d69faca0f2c1b67c0a95addc94d72aa9a6990" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:48.773935 systemd[1]: Started cri-containerd-f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250.scope - libcontainer container f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250. Sep 9 05:36:48.936739 containerd[1536]: time="2025-09-09T05:36:48.936675757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff649d8c5-kdrgr,Uid:b6c8920a-1a2d-4f75-ae16-4a56b5ed558a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250\"" Sep 9 05:36:48.943760 containerd[1536]: time="2025-09-09T05:36:48.943337161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 05:36:49.287420 containerd[1536]: time="2025-09-09T05:36:49.287348160Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\" id:\"e5f1c9cc92c6a0925dd8cdda5a02f7aa9400106e803286968a4fbee6e0ac9e07\" pid:3912 exit_status:1 exited_at:{seconds:1757396209 nanos:285529338}" Sep 9 05:36:49.531798 systemd-networkd[1447]: cali36985fa2ad0: Gained IPv6LL Sep 9 05:36:49.726706 systemd-networkd[1447]: vxlan.calico: Link UP Sep 9 05:36:49.726725 systemd-networkd[1447]: vxlan.calico: Gained carrier Sep 9 05:36:50.119101 containerd[1536]: time="2025-09-09T05:36:50.119028480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-hnm6t,Uid:679a70ba-6cff-4a0a-9a8f-12021cd530ce,Namespace:calico-apiserver,Attempt:0,}" Sep 9 
05:36:50.456701 systemd-networkd[1447]: calicaafc8e986f: Link UP Sep 9 05:36:50.458513 systemd-networkd[1447]: calicaafc8e986f: Gained carrier Sep 9 05:36:50.492658 containerd[1536]: 2025-09-09 05:36:50.236 [INFO][4103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0 calico-apiserver-79f7b7478c- calico-apiserver 679a70ba-6cff-4a0a-9a8f-12021cd530ce 837 0 2025-09-09 05:36:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f7b7478c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 calico-apiserver-79f7b7478c-hnm6t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicaafc8e986f [] [] }} ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-" Sep 9 05:36:50.492658 containerd[1536]: 2025-09-09 05:36:50.236 [INFO][4103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.492658 containerd[1536]: 2025-09-09 05:36:50.340 [INFO][4127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" HandleID="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.493068 
containerd[1536]: 2025-09-09 05:36:50.340 [INFO][4127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" HandleID="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bf1f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-41a4a07365", "pod":"calico-apiserver-79f7b7478c-hnm6t", "timestamp":"2025-09-09 05:36:50.340599914 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.341 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.341 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.341 [INFO][4127] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.364 [INFO][4127] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.380 [INFO][4127] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.392 [INFO][4127] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.398 [INFO][4127] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493068 containerd[1536]: 2025-09-09 05:36:50.405 [INFO][4127] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.405 [INFO][4127] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.410 [INFO][4127] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62 Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.419 [INFO][4127] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.442 [INFO][4127] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.194/26] block=192.168.66.192/26 handle="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.442 [INFO][4127] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.194/26] handle="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.442 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:50.493438 containerd[1536]: 2025-09-09 05:36:50.442 [INFO][4127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.194/26] IPv6=[] ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" HandleID="k8s-pod-network.4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.494649 containerd[1536]: 2025-09-09 05:36:50.451 [INFO][4103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0", GenerateName:"calico-apiserver-79f7b7478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"679a70ba-6cff-4a0a-9a8f-12021cd530ce", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"79f7b7478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"calico-apiserver-79f7b7478c-hnm6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicaafc8e986f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:50.494758 containerd[1536]: 2025-09-09 05:36:50.451 [INFO][4103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.194/32] ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.494758 containerd[1536]: 2025-09-09 05:36:50.451 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicaafc8e986f ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.494758 containerd[1536]: 2025-09-09 05:36:50.459 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" 
WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.494826 containerd[1536]: 2025-09-09 05:36:50.460 [INFO][4103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0", GenerateName:"calico-apiserver-79f7b7478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"679a70ba-6cff-4a0a-9a8f-12021cd530ce", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f7b7478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62", Pod:"calico-apiserver-79f7b7478c-hnm6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicaafc8e986f", MAC:"5a:c0:44:da:0f:9e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:50.494889 containerd[1536]: 2025-09-09 05:36:50.486 [INFO][4103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-hnm6t" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--hnm6t-eth0" Sep 9 05:36:50.593095 containerd[1536]: time="2025-09-09T05:36:50.592908776Z" level=info msg="connecting to shim 4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62" address="unix:///run/containerd/s/10cbc98837cabcc65a204d5073c097e17c538ec92992cee88c69dacba9613385" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:50.666224 systemd[1]: Started cri-containerd-4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62.scope - libcontainer container 4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62. 
Sep 9 05:36:50.802697 containerd[1536]: time="2025-09-09T05:36:50.802532724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:50.806454 containerd[1536]: time="2025-09-09T05:36:50.806374657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 05:36:50.808782 containerd[1536]: time="2025-09-09T05:36:50.808497333Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:50.818218 containerd[1536]: time="2025-09-09T05:36:50.818105003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-hnm6t,Uid:679a70ba-6cff-4a0a-9a8f-12021cd530ce,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62\"" Sep 9 05:36:50.819680 containerd[1536]: time="2025-09-09T05:36:50.819170824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:50.820272 containerd[1536]: time="2025-09-09T05:36:50.820228468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.876823575s" Sep 9 05:36:50.820683 containerd[1536]: time="2025-09-09T05:36:50.820520767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 
05:36:50.825654 containerd[1536]: time="2025-09-09T05:36:50.825142170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:36:50.825845 containerd[1536]: time="2025-09-09T05:36:50.825702406Z" level=info msg="CreateContainer within sandbox \"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 05:36:50.842194 containerd[1536]: time="2025-09-09T05:36:50.842117869Z" level=info msg="Container 4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:50.854235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110975601.mount: Deactivated successfully. Sep 9 05:36:50.862172 containerd[1536]: time="2025-09-09T05:36:50.862113578Z" level=info msg="CreateContainer within sandbox \"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a\"" Sep 9 05:36:50.863762 containerd[1536]: time="2025-09-09T05:36:50.863267654Z" level=info msg="StartContainer for \"4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a\"" Sep 9 05:36:50.866399 containerd[1536]: time="2025-09-09T05:36:50.866312630Z" level=info msg="connecting to shim 4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a" address="unix:///run/containerd/s/03e05dad8026631e968521da846d69faca0f2c1b67c0a95addc94d72aa9a6990" protocol=ttrpc version=3 Sep 9 05:36:50.899988 systemd[1]: Started cri-containerd-4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a.scope - libcontainer container 4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a. 
Sep 9 05:36:50.992045 containerd[1536]: time="2025-09-09T05:36:50.991963152Z" level=info msg="StartContainer for \"4b8b3d70d739535f703de380f2d3f660772f488829f19b7a2d25917d2d84ea4a\" returns successfully" Sep 9 05:36:51.119267 containerd[1536]: time="2025-09-09T05:36:51.119088609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-6n9zh,Uid:f35fef50-a2f4-447d-b45d-e83823339dd4,Namespace:calico-apiserver,Attempt:0,}" Sep 9 05:36:51.259985 systemd-networkd[1447]: vxlan.calico: Gained IPv6LL Sep 9 05:36:51.374237 systemd-networkd[1447]: caliea8708c0a78: Link UP Sep 9 05:36:51.376167 systemd-networkd[1447]: caliea8708c0a78: Gained carrier Sep 9 05:36:51.409340 containerd[1536]: 2025-09-09 05:36:51.188 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0 calico-apiserver-79f7b7478c- calico-apiserver f35fef50-a2f4-447d-b45d-e83823339dd4 840 0 2025-09-09 05:36:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79f7b7478c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 calico-apiserver-79f7b7478c-6n9zh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliea8708c0a78 [] [] }} ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-" Sep 9 05:36:51.409340 containerd[1536]: 2025-09-09 05:36:51.188 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" 
Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.409340 containerd[1536]: 2025-09-09 05:36:51.247 [INFO][4237] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" HandleID="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.247 [INFO][4237] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" HandleID="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4452.0.0-n-41a4a07365", "pod":"calico-apiserver-79f7b7478c-6n9zh", "timestamp":"2025-09-09 05:36:51.247409334 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.248 [INFO][4237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.248 [INFO][4237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.248 [INFO][4237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.282 [INFO][4237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.306 [INFO][4237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.329 [INFO][4237] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.335 [INFO][4237] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411233 containerd[1536]: 2025-09-09 05:36:51.341 [INFO][4237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.341 [INFO][4237] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.345 [INFO][4237] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.352 [INFO][4237] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.365 [INFO][4237] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.195/26] block=192.168.66.192/26 handle="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.365 [INFO][4237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.195/26] handle="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.365 [INFO][4237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:51.411919 containerd[1536]: 2025-09-09 05:36:51.365 [INFO][4237] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.195/26] IPv6=[] ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" HandleID="k8s-pod-network.919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.412118 containerd[1536]: 2025-09-09 05:36:51.369 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0", GenerateName:"calico-apiserver-79f7b7478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f35fef50-a2f4-447d-b45d-e83823339dd4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"79f7b7478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"calico-apiserver-79f7b7478c-6n9zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea8708c0a78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:51.412200 containerd[1536]: 2025-09-09 05:36:51.369 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.195/32] ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.412200 containerd[1536]: 2025-09-09 05:36:51.369 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea8708c0a78 ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.412200 containerd[1536]: 2025-09-09 05:36:51.376 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" 
WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.412278 containerd[1536]: 2025-09-09 05:36:51.378 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0", GenerateName:"calico-apiserver-79f7b7478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f35fef50-a2f4-447d-b45d-e83823339dd4", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79f7b7478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc", Pod:"calico-apiserver-79f7b7478c-6n9zh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.66.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliea8708c0a78", MAC:"7a:56:cc:67:14:ed", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:51.412359 containerd[1536]: 2025-09-09 05:36:51.404 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" Namespace="calico-apiserver" Pod="calico-apiserver-79f7b7478c-6n9zh" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--apiserver--79f7b7478c--6n9zh-eth0" Sep 9 05:36:51.455972 containerd[1536]: time="2025-09-09T05:36:51.455880697Z" level=info msg="connecting to shim 919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc" address="unix:///run/containerd/s/29368254d4412c1ff6b2956ac439a25c963ed81ef9637c5601ba1cd3565ea5dc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:51.534907 systemd[1]: Started cri-containerd-919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc.scope - libcontainer container 919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc. 
Sep 9 05:36:51.624529 containerd[1536]: time="2025-09-09T05:36:51.624422028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79f7b7478c-6n9zh,Uid:f35fef50-a2f4-447d-b45d-e83823339dd4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc\"" Sep 9 05:36:51.964226 systemd-networkd[1447]: calicaafc8e986f: Gained IPv6LL Sep 9 05:36:52.119150 kubelet[2729]: E0909 05:36:52.119085 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:52.121613 containerd[1536]: time="2025-09-09T05:36:52.121176155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p8l6,Uid:fac6e703-38ee-41ec-b92c-fe35196c41bc,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:52.452915 systemd-networkd[1447]: cali9186795c50a: Link UP Sep 9 05:36:52.459455 systemd-networkd[1447]: cali9186795c50a: Gained carrier Sep 9 05:36:52.507934 containerd[1536]: 2025-09-09 05:36:52.244 [INFO][4299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0 coredns-668d6bf9bc- kube-system fac6e703-38ee-41ec-b92c-fe35196c41bc 838 0 2025-09-09 05:36:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 coredns-668d6bf9bc-6p8l6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9186795c50a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-" 
Sep 9 05:36:52.507934 containerd[1536]: 2025-09-09 05:36:52.245 [INFO][4299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.507934 containerd[1536]: 2025-09-09 05:36:52.324 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" HandleID="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.324 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" HandleID="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032ca40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"coredns-668d6bf9bc-6p8l6", "timestamp":"2025-09-09 05:36:52.32412059 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.324 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.324 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.324 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.338 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.362 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.376 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.380 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508266 containerd[1536]: 2025-09-09 05:36:52.389 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.390 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.395 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.404 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.426 [INFO][4313] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.196/26] block=192.168.66.192/26 handle="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.426 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.196/26] handle="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.427 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:52.508561 containerd[1536]: 2025-09-09 05:36:52.428 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.196/26] IPv6=[] ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" HandleID="k8s-pod-network.0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.436 [INFO][4299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fac6e703-38ee-41ec-b92c-fe35196c41bc", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"coredns-668d6bf9bc-6p8l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9186795c50a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.437 [INFO][4299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.196/32] ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.437 [INFO][4299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9186795c50a ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.460 [INFO][4299] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.461 [INFO][4299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fac6e703-38ee-41ec-b92c-fe35196c41bc", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c", Pod:"coredns-668d6bf9bc-6p8l6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9186795c50a", MAC:"a2:61:bc:89:79:1b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:52.509805 containerd[1536]: 2025-09-09 05:36:52.500 [INFO][4299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" Namespace="kube-system" Pod="coredns-668d6bf9bc-6p8l6" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--6p8l6-eth0" Sep 9 05:36:52.603883 systemd-networkd[1447]: caliea8708c0a78: Gained IPv6LL Sep 9 05:36:52.644631 containerd[1536]: time="2025-09-09T05:36:52.642952095Z" level=info msg="connecting to shim 0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c" address="unix:///run/containerd/s/1926edf709e90ba40294abf2229d98696fa80f10a01316c36a160b6694fed7b4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:52.793415 systemd[1]: Started cri-containerd-0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c.scope - libcontainer container 0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c. 
Sep 9 05:36:53.123063 containerd[1536]: time="2025-09-09T05:36:53.122908946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc46b88c-545l7,Uid:bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:53.125826 containerd[1536]: time="2025-09-09T05:36:53.125518024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6p8l6,Uid:fac6e703-38ee-41ec-b92c-fe35196c41bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c\"" Sep 9 05:36:53.135559 containerd[1536]: time="2025-09-09T05:36:53.135434269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q6kr,Uid:5cc829cd-94e6-4805-83c2-6c73a3a71220,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:53.164780 kubelet[2729]: E0909 05:36:53.164728 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:53.179582 containerd[1536]: time="2025-09-09T05:36:53.178234540Z" level=info msg="CreateContainer within sandbox \"0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:53.258583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513032491.mount: Deactivated successfully. Sep 9 05:36:53.269584 containerd[1536]: time="2025-09-09T05:36:53.269031181Z" level=info msg="Container 23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:53.280455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475779941.mount: Deactivated successfully. 
Sep 9 05:36:53.306490 containerd[1536]: time="2025-09-09T05:36:53.306432563Z" level=info msg="CreateContainer within sandbox \"0dcca0ae78658a41ad885dcc533c2d951071f6987191d852ce96331fef91663c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c\"" Sep 9 05:36:53.314582 containerd[1536]: time="2025-09-09T05:36:53.313811429Z" level=info msg="StartContainer for \"23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c\"" Sep 9 05:36:53.327457 containerd[1536]: time="2025-09-09T05:36:53.327276399Z" level=info msg="connecting to shim 23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c" address="unix:///run/containerd/s/1926edf709e90ba40294abf2229d98696fa80f10a01316c36a160b6694fed7b4" protocol=ttrpc version=3 Sep 9 05:36:53.450536 systemd[1]: Started cri-containerd-23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c.scope - libcontainer container 23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c. 
Sep 9 05:36:53.658586 containerd[1536]: time="2025-09-09T05:36:53.657169842Z" level=info msg="StartContainer for \"23d8002f6595718c7db5bc97b111da4c8c5e242176fefc9c1081efbc6f34694c\" returns successfully" Sep 9 05:36:53.756053 systemd-networkd[1447]: cali9186795c50a: Gained IPv6LL Sep 9 05:36:53.903906 systemd-networkd[1447]: cali8bffa8f7e64: Link UP Sep 9 05:36:53.908037 systemd-networkd[1447]: cali8bffa8f7e64: Gained carrier Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.402 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0 csi-node-driver- calico-system 5cc829cd-94e6-4805-83c2-6c73a3a71220 712 0 2025-09-09 05:36:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 csi-node-driver-9q6kr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8bffa8f7e64 [] [] }} ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.403 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.662 [INFO][4436] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" HandleID="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Workload="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.663 [INFO][4436] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" HandleID="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Workload="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000418d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"csi-node-driver-9q6kr", "timestamp":"2025-09-09 05:36:53.662818716 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.664 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.664 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.665 [INFO][4436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.697 [INFO][4436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.721 [INFO][4436] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.740 [INFO][4436] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.762 [INFO][4436] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.791 [INFO][4436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.792 [INFO][4436] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.797 [INFO][4436] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028 Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.828 [INFO][4436] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.869 [INFO][4436] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.197/26] block=192.168.66.192/26 handle="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.869 [INFO][4436] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.197/26] handle="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.869 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:53.983745 containerd[1536]: 2025-09-09 05:36:53.869 [INFO][4436] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.197/26] IPv6=[] ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" HandleID="k8s-pod-network.2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Workload="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.883 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5cc829cd-94e6-4805-83c2-6c73a3a71220", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"csi-node-driver-9q6kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bffa8f7e64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.884 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.197/32] ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.885 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bffa8f7e64 ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.913 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.921 
[INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5cc829cd-94e6-4805-83c2-6c73a3a71220", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028", Pod:"csi-node-driver-9q6kr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.66.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8bffa8f7e64", MAC:"e6:02:bd:8a:f7:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:53.988080 containerd[1536]: 2025-09-09 05:36:53.951 [INFO][4387] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" Namespace="calico-system" Pod="csi-node-driver-9q6kr" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-csi--node--driver--9q6kr-eth0" Sep 9 05:36:54.071788 containerd[1536]: time="2025-09-09T05:36:54.071377057Z" level=info msg="connecting to shim 2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028" address="unix:///run/containerd/s/11caff9191eb313c3272c723e4aa4e4e588b1368d9af95b0b63f78917de857bf" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:54.118462 kubelet[2729]: E0909 05:36:54.117590 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:54.122926 containerd[1536]: time="2025-09-09T05:36:54.121347200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6svxj,Uid:4d2c5372-f70a-4bb7-a7eb-f4b8172296a7,Namespace:calico-system,Attempt:0,}" Sep 9 05:36:54.122926 containerd[1536]: time="2025-09-09T05:36:54.122623858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t7jv2,Uid:73ac7708-2947-4b1b-befb-e7b4c7e7afc5,Namespace:kube-system,Attempt:0,}" Sep 9 05:36:54.182900 systemd[1]: Started cri-containerd-2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028.scope - libcontainer container 2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028. 
Sep 9 05:36:54.251482 systemd-networkd[1447]: caliaeb057a01aa: Link UP Sep 9 05:36:54.255695 systemd-networkd[1447]: caliaeb057a01aa: Gained carrier Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.495 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0 calico-kube-controllers-5cbc46b88c- calico-system bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4 839 0 2025-09-09 05:36:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cbc46b88c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 calico-kube-controllers-5cbc46b88c-545l7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaeb057a01aa [] [] }} ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.497 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.677 [INFO][4443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" HandleID="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" 
Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.678 [INFO][4443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" HandleID="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"calico-kube-controllers-5cbc46b88c-545l7", "timestamp":"2025-09-09 05:36:53.677805952 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.678 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.870 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.872 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.898 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.941 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:53.994 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.027 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.053 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.053 [INFO][4443] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.070 [INFO][4443] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681 Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.102 [INFO][4443] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.136 [INFO][4443] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.198/26] block=192.168.66.192/26 handle="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.142 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.198/26] handle="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.142 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:54.346156 containerd[1536]: 2025-09-09 05:36:54.142 [INFO][4443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.198/26] IPv6=[] ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" HandleID="k8s-pod-network.f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Workload="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.210 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0", GenerateName:"calico-kube-controllers-5cbc46b88c-", Namespace:"calico-system", SelfLink:"", UID:"bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbc46b88c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"calico-kube-controllers-5cbc46b88c-545l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb057a01aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.219 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.198/32] ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.219 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeb057a01aa ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.258 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.262 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0", GenerateName:"calico-kube-controllers-5cbc46b88c-", Namespace:"calico-system", SelfLink:"", UID:"bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbc46b88c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681", Pod:"calico-kube-controllers-5cbc46b88c-545l7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.66.198/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaeb057a01aa", MAC:"4e:7e:17:74:5c:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:54.348211 containerd[1536]: 2025-09-09 05:36:54.303 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" Namespace="calico-system" Pod="calico-kube-controllers-5cbc46b88c-545l7" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-calico--kube--controllers--5cbc46b88c--545l7-eth0" Sep 9 05:36:54.463654 containerd[1536]: time="2025-09-09T05:36:54.463521722Z" level=info msg="connecting to shim f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681" address="unix:///run/containerd/s/ea430828c1d9ccb2a9da395a00a6c66927bbfeaf80912a554af81d8f3aeba670" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:54.487073 containerd[1536]: time="2025-09-09T05:36:54.486873724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9q6kr,Uid:5cc829cd-94e6-4805-83c2-6c73a3a71220,Namespace:calico-system,Attempt:0,} returns sandbox id \"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028\"" Sep 9 05:36:54.662394 systemd[1]: Started cri-containerd-f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681.scope - libcontainer container f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681. 
Sep 9 05:36:54.672153 kubelet[2729]: E0909 05:36:54.671934 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:54.853519 kubelet[2729]: I0909 05:36:54.849396 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6p8l6" podStartSLOduration=46.849363305 podStartE2EDuration="46.849363305s" podCreationTimestamp="2025-09-09 05:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:54.783837125 +0000 UTC m=+52.966333853" watchObservedRunningTime="2025-09-09 05:36:54.849363305 +0000 UTC m=+53.031860026" Sep 9 05:36:55.035387 systemd-networkd[1447]: cali67e683f6ea8: Link UP Sep 9 05:36:55.036908 systemd-networkd[1447]: cali67e683f6ea8: Gained carrier Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.531 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0 goldmane-54d579b49d- calico-system 4d2c5372-f70a-4bb7-a7eb-f4b8172296a7 835 0 2025-09-09 05:36:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 goldmane-54d579b49d-6svxj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali67e683f6ea8 [] [] }} ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.532 [INFO][4495] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.769 [INFO][4580] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" HandleID="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Workload="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.781 [INFO][4580] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" HandleID="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Workload="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e8310), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"goldmane-54d579b49d-6svxj", "timestamp":"2025-09-09 05:36:54.769311487 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.786 [INFO][4580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.788 [INFO][4580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.788 [INFO][4580] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.849 [INFO][4580] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.881 [INFO][4580] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.929 [INFO][4580] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.938 [INFO][4580] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.956 [INFO][4580] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.956 [INFO][4580] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.962 [INFO][4580] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:54.981 [INFO][4580] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:55.002 [INFO][4580] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.199/26] block=192.168.66.192/26 handle="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:55.002 [INFO][4580] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.199/26] handle="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:55.005 [INFO][4580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:55.123021 containerd[1536]: 2025-09-09 05:36:55.006 [INFO][4580] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.199/26] IPv6=[] ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" HandleID="k8s-pod-network.dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Workload="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.017 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"goldmane-54d579b49d-6svxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67e683f6ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.019 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.199/32] ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.020 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67e683f6ea8 ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.040 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.044 [INFO][4495] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"4d2c5372-f70a-4bb7-a7eb-f4b8172296a7", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d", Pod:"goldmane-54d579b49d-6svxj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.66.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67e683f6ea8", MAC:"4e:4a:d4:11:bf:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:55.127178 containerd[1536]: 2025-09-09 05:36:55.088 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" Namespace="calico-system" Pod="goldmane-54d579b49d-6svxj" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-goldmane--54d579b49d--6svxj-eth0" Sep 9 05:36:55.132590 containerd[1536]: time="2025-09-09T05:36:55.131478148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbc46b88c-545l7,Uid:bb5c32bf-ef45-46d8-b3c3-043c0a4c49f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681\"" Sep 9 05:36:55.218481 containerd[1536]: time="2025-09-09T05:36:55.217720388Z" level=info msg="connecting to shim dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d" address="unix:///run/containerd/s/bf5245eb709413d66013e23ce539e8b637e37af2cd79c3a9c02717b19f579237" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:55.225381 systemd-networkd[1447]: cali4badb96ab71: Link UP Sep 9 05:36:55.230660 systemd-networkd[1447]: cali4badb96ab71: Gained carrier Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:54.538 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0 coredns-668d6bf9bc- kube-system 73ac7708-2947-4b1b-befb-e7b4c7e7afc5 832 0 2025-09-09 05:36:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4452.0.0-n-41a4a07365 coredns-668d6bf9bc-t7jv2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4badb96ab71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-" Sep 9 
05:36:55.296196 containerd[1536]: 2025-09-09 05:36:54.539 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:54.832 [INFO][4585] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" HandleID="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:54.837 [INFO][4585] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" HandleID="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042d6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4452.0.0-n-41a4a07365", "pod":"coredns-668d6bf9bc-t7jv2", "timestamp":"2025-09-09 05:36:54.832368143 +0000 UTC"}, Hostname:"ci-4452.0.0-n-41a4a07365", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:54.839 [INFO][4585] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.003 [INFO][4585] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.003 [INFO][4585] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4452.0.0-n-41a4a07365' Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.034 [INFO][4585] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.092 [INFO][4585] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.116 [INFO][4585] ipam/ipam.go 511: Trying affinity for 192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.124 [INFO][4585] ipam/ipam.go 158: Attempting to load block cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.136 [INFO][4585] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.66.192/26 host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.137 [INFO][4585] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.66.192/26 handle="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.145 [INFO][4585] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7 Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.168 [INFO][4585] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.66.192/26 handle="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.197 [INFO][4585] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.66.200/26] block=192.168.66.192/26 handle="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.198 [INFO][4585] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.66.200/26] handle="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" host="ci-4452.0.0-n-41a4a07365" Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.198 [INFO][4585] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 05:36:55.296196 containerd[1536]: 2025-09-09 05:36:55.198 [INFO][4585] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.66.200/26] IPv6=[] ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" HandleID="k8s-pod-network.7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Workload="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.208 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73ac7708-2947-4b1b-befb-e7b4c7e7afc5", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"", Pod:"coredns-668d6bf9bc-t7jv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4badb96ab71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.209 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.66.200/32] ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.210 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4badb96ab71 ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.234 [INFO][4522] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.241 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"73ac7708-2947-4b1b-befb-e7b4c7e7afc5", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 5, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4452.0.0-n-41a4a07365", ContainerID:"7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7", Pod:"coredns-668d6bf9bc-t7jv2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.66.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4badb96ab71", MAC:"de:f8:4c:ce:c6:cc", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 05:36:55.297378 containerd[1536]: 2025-09-09 05:36:55.275 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" Namespace="kube-system" Pod="coredns-668d6bf9bc-t7jv2" WorkloadEndpoint="ci--4452.0.0--n--41a4a07365-k8s-coredns--668d6bf9bc--t7jv2-eth0" Sep 9 05:36:55.338334 systemd[1]: Started cri-containerd-dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d.scope - libcontainer container dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d. Sep 9 05:36:55.381890 containerd[1536]: time="2025-09-09T05:36:55.381780422Z" level=info msg="connecting to shim 7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7" address="unix:///run/containerd/s/bdd29bcdd237b55151c9e9eb1a6c10be18c9460ff844a1cb88b4cc8a38d7b6f3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:36:55.444341 systemd[1]: Started cri-containerd-7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7.scope - libcontainer container 7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7. 
Sep 9 05:36:55.613660 systemd-networkd[1447]: cali8bffa8f7e64: Gained IPv6LL Sep 9 05:36:55.655737 containerd[1536]: time="2025-09-09T05:36:55.655653749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t7jv2,Uid:73ac7708-2947-4b1b-befb-e7b4c7e7afc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7\"" Sep 9 05:36:55.660364 kubelet[2729]: E0909 05:36:55.660331 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:55.668970 containerd[1536]: time="2025-09-09T05:36:55.668887676Z" level=info msg="CreateContainer within sandbox \"7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 05:36:55.704337 containerd[1536]: time="2025-09-09T05:36:55.704287917Z" level=info msg="Container c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:55.723995 containerd[1536]: time="2025-09-09T05:36:55.723788318Z" level=info msg="CreateContainer within sandbox \"7309ec08f6baddc1233706217a06abf235eff893f88adc79fb3c68de79ffc9b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91\"" Sep 9 05:36:55.725708 containerd[1536]: time="2025-09-09T05:36:55.725665112Z" level=info msg="StartContainer for \"c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91\"" Sep 9 05:36:55.729967 containerd[1536]: time="2025-09-09T05:36:55.728358833Z" level=info msg="connecting to shim c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91" address="unix:///run/containerd/s/bdd29bcdd237b55151c9e9eb1a6c10be18c9460ff844a1cb88b4cc8a38d7b6f3" protocol=ttrpc version=3 Sep 9 05:36:55.763620 kubelet[2729]: E0909 05:36:55.762615 
2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:55.791589 systemd[1]: Started cri-containerd-c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91.scope - libcontainer container c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91. Sep 9 05:36:55.804385 systemd-networkd[1447]: caliaeb057a01aa: Gained IPv6LL Sep 9 05:36:55.823743 containerd[1536]: time="2025-09-09T05:36:55.823687719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-6svxj,Uid:4d2c5372-f70a-4bb7-a7eb-f4b8172296a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d\"" Sep 9 05:36:55.904307 containerd[1536]: time="2025-09-09T05:36:55.903904937Z" level=info msg="StartContainer for \"c792c725653eb7e8a4c24ae28040a8a43460ff63e88c5d7347f9382d9e984b91\" returns successfully" Sep 9 05:36:56.508066 systemd-networkd[1447]: cali4badb96ab71: Gained IPv6LL Sep 9 05:36:56.744836 containerd[1536]: time="2025-09-09T05:36:56.744157488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:56.747054 containerd[1536]: time="2025-09-09T05:36:56.746314922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 05:36:56.749679 containerd[1536]: time="2025-09-09T05:36:56.749482229Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:56.753946 containerd[1536]: time="2025-09-09T05:36:56.753855497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:36:56.756361 containerd[1536]: time="2025-09-09T05:36:56.756289634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.930958142s" Sep 9 05:36:56.756361 containerd[1536]: time="2025-09-09T05:36:56.756360477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 05:36:56.762378 containerd[1536]: time="2025-09-09T05:36:56.761924947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 05:36:56.768246 containerd[1536]: time="2025-09-09T05:36:56.768194465Z" level=info msg="CreateContainer within sandbox \"4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 05:36:56.776883 kubelet[2729]: E0909 05:36:56.776757 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:56.779620 kubelet[2729]: E0909 05:36:56.779518 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:56.795595 containerd[1536]: time="2025-09-09T05:36:56.792903008Z" level=info msg="Container 3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:36:56.819526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941909637.mount: 
Deactivated successfully. Sep 9 05:36:56.827897 containerd[1536]: time="2025-09-09T05:36:56.827157550Z" level=info msg="CreateContainer within sandbox \"4994572d9f6dfc096a27fdb3361b615950db9ddee45756dd2e59b02980f79a62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2\"" Sep 9 05:36:56.832029 containerd[1536]: time="2025-09-09T05:36:56.830604527Z" level=info msg="StartContainer for \"3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2\"" Sep 9 05:36:56.838318 containerd[1536]: time="2025-09-09T05:36:56.838248653Z" level=info msg="connecting to shim 3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2" address="unix:///run/containerd/s/10cbc98837cabcc65a204d5073c097e17c538ec92992cee88c69dacba9613385" protocol=ttrpc version=3 Sep 9 05:36:56.856606 kubelet[2729]: I0909 05:36:56.856490 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t7jv2" podStartSLOduration=48.856441584 podStartE2EDuration="48.856441584s" podCreationTimestamp="2025-09-09 05:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:36:56.817959785 +0000 UTC m=+55.000456506" watchObservedRunningTime="2025-09-09 05:36:56.856441584 +0000 UTC m=+55.038938310" Sep 9 05:36:56.900956 systemd[1]: Started cri-containerd-3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2.scope - libcontainer container 3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2. 
Sep 9 05:36:57.019973 systemd-networkd[1447]: cali67e683f6ea8: Gained IPv6LL Sep 9 05:36:57.149505 containerd[1536]: time="2025-09-09T05:36:57.149376427Z" level=info msg="StartContainer for \"3eb56eac0ab4e69da521088b6c0fca5f795270bac62f91b3cb809a7dffe4f6f2\" returns successfully" Sep 9 05:36:57.783718 kubelet[2729]: E0909 05:36:57.782937 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:58.787920 kubelet[2729]: E0909 05:36:58.787878 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 9 05:36:59.366019 kubelet[2729]: I0909 05:36:59.364818 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f7b7478c-hnm6t" podStartSLOduration=32.428624671 podStartE2EDuration="38.364793135s" podCreationTimestamp="2025-09-09 05:36:21 +0000 UTC" firstStartedPulling="2025-09-09 05:36:50.824371665 +0000 UTC m=+49.006868389" lastFinishedPulling="2025-09-09 05:36:56.760540132 +0000 UTC m=+54.943036853" observedRunningTime="2025-09-09 05:36:57.802815058 +0000 UTC m=+55.985311774" watchObservedRunningTime="2025-09-09 05:36:59.364793135 +0000 UTC m=+57.547289846" Sep 9 05:37:00.499123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305613317.mount: Deactivated successfully. 
Sep 9 05:37:00.550713 containerd[1536]: time="2025-09-09T05:37:00.550645696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 05:37:00.552752 containerd[1536]: time="2025-09-09T05:37:00.552677658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:00.557435 containerd[1536]: time="2025-09-09T05:37:00.557378251Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:00.563150 containerd[1536]: time="2025-09-09T05:37:00.562089508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:37:00.564399 containerd[1536]: time="2025-09-09T05:37:00.564324691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.802290829s" Sep 9 05:37:00.564770 containerd[1536]: time="2025-09-09T05:37:00.564735769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 05:37:00.570757 containerd[1536]: time="2025-09-09T05:37:00.570513171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 05:37:00.573117 containerd[1536]: time="2025-09-09T05:37:00.572132483Z" level=info msg="CreateContainer within sandbox 
\"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 05:37:00.585934 containerd[1536]: time="2025-09-09T05:37:00.585868212Z" level=info msg="Container 196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:37:00.613058 containerd[1536]: time="2025-09-09T05:37:00.612988988Z" level=info msg="CreateContainer within sandbox \"f2b8060b71015535b78c42fa2749480c56d3f7d944dcf5d93a5125f0ced37250\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6\"" Sep 9 05:37:00.613916 containerd[1536]: time="2025-09-09T05:37:00.613867896Z" level=info msg="StartContainer for \"196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6\"" Sep 9 05:37:00.617752 containerd[1536]: time="2025-09-09T05:37:00.617694130Z" level=info msg="connecting to shim 196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6" address="unix:///run/containerd/s/03e05dad8026631e968521da846d69faca0f2c1b67c0a95addc94d72aa9a6990" protocol=ttrpc version=3 Sep 9 05:37:00.684992 systemd[1]: Started cri-containerd-196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6.scope - libcontainer container 196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6. Sep 9 05:37:00.824780 containerd[1536]: time="2025-09-09T05:37:00.824481977Z" level=info msg="StartContainer for \"196eeeca7422c9cc6b0308888190ffff6a9bf1447592cd29196fe4da5f0eb8a6\" returns successfully" Sep 9 05:37:00.989745 systemd[1]: Started sshd@10-24.199.106.51:22-139.178.89.65:59610.service - OpenSSH per-connection server daemon (139.178.89.65:59610). 
Sep 9 05:37:01.034276 containerd[1536]: time="2025-09-09T05:37:01.034201542Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:01.035242 containerd[1536]: time="2025-09-09T05:37:01.035094941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 9 05:37:01.041412 containerd[1536]: time="2025-09-09T05:37:01.040603333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 468.297463ms"
Sep 9 05:37:01.042516 containerd[1536]: time="2025-09-09T05:37:01.041524450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 9 05:37:01.048823 containerd[1536]: time="2025-09-09T05:37:01.048762363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 9 05:37:01.057364 containerd[1536]: time="2025-09-09T05:37:01.057296427Z" level=info msg="CreateContainer within sandbox \"919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 9 05:37:01.088206 containerd[1536]: time="2025-09-09T05:37:01.087588369Z" level=info msg="Container 43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:37:01.134949 containerd[1536]: time="2025-09-09T05:37:01.134722563Z" level=info msg="CreateContainer within sandbox \"919dc477996a1655695cc350eb8e4ebbb3841e8c84bfd660de58d7385e708ddc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe\""
Sep 9 05:37:01.137912 containerd[1536]: time="2025-09-09T05:37:01.137855976Z" level=info msg="StartContainer for \"43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe\""
Sep 9 05:37:01.147055 containerd[1536]: time="2025-09-09T05:37:01.146989047Z" level=info msg="connecting to shim 43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe" address="unix:///run/containerd/s/29368254d4412c1ff6b2956ac439a25c963ed81ef9637c5601ba1cd3565ea5dc" protocol=ttrpc version=3
Sep 9 05:37:01.220892 systemd[1]: Started cri-containerd-43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe.scope - libcontainer container 43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe.
Sep 9 05:37:01.251454 sshd[4863]: Accepted publickey for core from 139.178.89.65 port 59610 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:01.257354 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:01.274244 systemd-logind[1492]: New session 10 of user core.
Sep 9 05:37:01.284928 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:37:01.313696 containerd[1536]: time="2025-09-09T05:37:01.313634278Z" level=info msg="StartContainer for \"43631cec4e39ae82ed7e1c99909d9921e6ed7995eb80916edb8938caefa13ebe\" returns successfully"
Sep 9 05:37:01.924513 kubelet[2729]: I0909 05:37:01.924196 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7ff649d8c5-kdrgr" podStartSLOduration=3.297800139 podStartE2EDuration="14.924156302s" podCreationTimestamp="2025-09-09 05:36:47 +0000 UTC" firstStartedPulling="2025-09-09 05:36:48.941728542 +0000 UTC m=+47.124225252" lastFinishedPulling="2025-09-09 05:37:00.568084709 +0000 UTC m=+58.750581415" observedRunningTime="2025-09-09 05:37:01.924075368 +0000 UTC m=+60.106572154" watchObservedRunningTime="2025-09-09 05:37:01.924156302 +0000 UTC m=+60.106653031"
Sep 9 05:37:02.886287 sshd[4889]: Connection closed by 139.178.89.65 port 59610
Sep 9 05:37:02.887164 sshd-session[4863]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:02.915485 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:37:02.917430 systemd[1]: sshd@10-24.199.106.51:22-139.178.89.65:59610.service: Deactivated successfully.
Sep 9 05:37:02.924257 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:37:02.932687 systemd-logind[1492]: Removed session 10.
Sep 9 05:37:03.493657 containerd[1536]: time="2025-09-09T05:37:03.493198115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:03.496075 containerd[1536]: time="2025-09-09T05:37:03.495992350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 9 05:37:03.496959 containerd[1536]: time="2025-09-09T05:37:03.496688636Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:03.506012 containerd[1536]: time="2025-09-09T05:37:03.505890622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:03.508112 containerd[1536]: time="2025-09-09T05:37:03.507943071Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.45911747s"
Sep 9 05:37:03.508112 containerd[1536]: time="2025-09-09T05:37:03.507992428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 9 05:37:03.510419 containerd[1536]: time="2025-09-09T05:37:03.510207103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 9 05:37:03.516154 containerd[1536]: time="2025-09-09T05:37:03.516013365Z" level=info msg="CreateContainer within sandbox \"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 9 05:37:03.630876 containerd[1536]: time="2025-09-09T05:37:03.629898790Z" level=info msg="Container 5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:37:03.639395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347202412.mount: Deactivated successfully.
Sep 9 05:37:03.689133 containerd[1536]: time="2025-09-09T05:37:03.687910241Z" level=info msg="CreateContainer within sandbox \"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0\""
Sep 9 05:37:03.691680 containerd[1536]: time="2025-09-09T05:37:03.690194319Z" level=info msg="StartContainer for \"5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0\""
Sep 9 05:37:03.695766 containerd[1536]: time="2025-09-09T05:37:03.695682304Z" level=info msg="connecting to shim 5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0" address="unix:///run/containerd/s/11caff9191eb313c3272c723e4aa4e4e588b1368d9af95b0b63f78917de857bf" protocol=ttrpc version=3
Sep 9 05:37:03.761995 systemd[1]: Started cri-containerd-5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0.scope - libcontainer container 5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0.
Sep 9 05:37:03.879459 containerd[1536]: time="2025-09-09T05:37:03.879402683Z" level=info msg="StartContainer for \"5efec413518cde1521482751bfceeec3659055cb481dccc5bdd054ec17b322e0\" returns successfully"
Sep 9 05:37:03.989576 kubelet[2729]: I0909 05:37:03.988913 2729 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:37:05.525124 kubelet[2729]: I0909 05:37:05.524995 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79f7b7478c-6n9zh" podStartSLOduration=35.105774769 podStartE2EDuration="44.524959343s" podCreationTimestamp="2025-09-09 05:36:21 +0000 UTC" firstStartedPulling="2025-09-09 05:36:51.627311733 +0000 UTC m=+49.809808448" lastFinishedPulling="2025-09-09 05:37:01.04649631 +0000 UTC m=+59.228993022" observedRunningTime="2025-09-09 05:37:01.955332119 +0000 UTC m=+60.137828843" watchObservedRunningTime="2025-09-09 05:37:05.524959343 +0000 UTC m=+63.707456067"
Sep 9 05:37:07.915386 systemd[1]: Started sshd@11-24.199.106.51:22-139.178.89.65:59616.service - OpenSSH per-connection server daemon (139.178.89.65:59616).
Sep 9 05:37:08.197120 sshd[4985]: Accepted publickey for core from 139.178.89.65 port 59616 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:08.199255 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:08.218665 systemd-logind[1492]: New session 11 of user core.
Sep 9 05:37:08.224476 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:37:09.183364 sshd[4988]: Connection closed by 139.178.89.65 port 59616
Sep 9 05:37:09.185417 sshd-session[4985]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:09.190975 systemd[1]: sshd@11-24.199.106.51:22-139.178.89.65:59616.service: Deactivated successfully.
Sep 9 05:37:09.195851 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:37:09.219860 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:37:09.223603 systemd-logind[1492]: Removed session 11.
Sep 9 05:37:09.572910 containerd[1536]: time="2025-09-09T05:37:09.572848545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:09.575434 containerd[1536]: time="2025-09-09T05:37:09.575370290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 9 05:37:09.583521 containerd[1536]: time="2025-09-09T05:37:09.583452594Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:09.638701 containerd[1536]: time="2025-09-09T05:37:09.638636840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:09.647473 containerd[1536]: time="2025-09-09T05:37:09.647368400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.129882841s"
Sep 9 05:37:09.647473 containerd[1536]: time="2025-09-09T05:37:09.647452017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 9 05:37:09.649358 containerd[1536]: time="2025-09-09T05:37:09.649277120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 9 05:37:09.749190 containerd[1536]: time="2025-09-09T05:37:09.749138509Z" level=info msg="CreateContainer within sandbox \"f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 9 05:37:09.813247 containerd[1536]: time="2025-09-09T05:37:09.812936739Z" level=info msg="Container 7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:37:09.891905 containerd[1536]: time="2025-09-09T05:37:09.891132068Z" level=info msg="CreateContainer within sandbox \"f7071ba38df1bcea381bc6b33fd3148d531f89ecb4dade561db9e4f46eeaf681\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\""
Sep 9 05:37:09.894095 containerd[1536]: time="2025-09-09T05:37:09.892387441Z" level=info msg="StartContainer for \"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\""
Sep 9 05:37:09.911148 containerd[1536]: time="2025-09-09T05:37:09.911082493Z" level=info msg="connecting to shim 7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355" address="unix:///run/containerd/s/ea430828c1d9ccb2a9da395a00a6c66927bbfeaf80912a554af81d8f3aeba670" protocol=ttrpc version=3
Sep 9 05:37:10.069856 systemd[1]: Started cri-containerd-7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355.scope - libcontainer container 7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355.
Sep 9 05:37:10.384884 containerd[1536]: time="2025-09-09T05:37:10.384816195Z" level=info msg="StartContainer for \"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\" returns successfully"
Sep 9 05:37:11.106766 kubelet[2729]: I0909 05:37:11.106671 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cbc46b88c-545l7" podStartSLOduration=29.597867919 podStartE2EDuration="44.106643518s" podCreationTimestamp="2025-09-09 05:36:27 +0000 UTC" firstStartedPulling="2025-09-09 05:36:55.139708736 +0000 UTC m=+53.322205435" lastFinishedPulling="2025-09-09 05:37:09.648484343 +0000 UTC m=+67.830981034" observedRunningTime="2025-09-09 05:37:11.106114626 +0000 UTC m=+69.288611336" watchObservedRunningTime="2025-09-09 05:37:11.106643518 +0000 UTC m=+69.289140238"
Sep 9 05:37:11.337895 containerd[1536]: time="2025-09-09T05:37:11.337747282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\" id:\"95f5ad654431c9c8dd030515632b6686ff5cc0d32c01c2e976b298bb690af63f\" pid:5066 exited_at:{seconds:1757396231 nanos:263189408}"
Sep 9 05:37:13.671055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3342859047.mount: Deactivated successfully.
Sep 9 05:37:14.214330 systemd[1]: Started sshd@12-24.199.106.51:22-139.178.89.65:44482.service - OpenSSH per-connection server daemon (139.178.89.65:44482).
Sep 9 05:37:14.468037 sshd[5087]: Accepted publickey for core from 139.178.89.65 port 44482 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:14.472626 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:14.480819 systemd-logind[1492]: New session 12 of user core.
Sep 9 05:37:14.487835 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:37:14.902427 containerd[1536]: time="2025-09-09T05:37:14.901813944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:14.907851 containerd[1536]: time="2025-09-09T05:37:14.906986893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 9 05:37:14.909851 containerd[1536]: time="2025-09-09T05:37:14.909009545Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:14.913784 containerd[1536]: time="2025-09-09T05:37:14.912153271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:14.914020 containerd[1536]: time="2025-09-09T05:37:14.913960104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.264628626s"
Sep 9 05:37:14.914020 containerd[1536]: time="2025-09-09T05:37:14.914001669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 9 05:37:14.932125 containerd[1536]: time="2025-09-09T05:37:14.931782759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 9 05:37:15.067800 containerd[1536]: time="2025-09-09T05:37:15.067391434Z" level=info msg="CreateContainer within sandbox \"dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 9 05:37:15.107239 containerd[1536]: time="2025-09-09T05:37:15.104767670Z" level=info msg="Container a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:37:15.119678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096818210.mount: Deactivated successfully.
Sep 9 05:37:15.133964 containerd[1536]: time="2025-09-09T05:37:15.133905572Z" level=info msg="CreateContainer within sandbox \"dc7fc6da98d8565c96d1767e3fb6a5c2501a3e327bd66dc837019aae2730eb6d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\""
Sep 9 05:37:15.136438 containerd[1536]: time="2025-09-09T05:37:15.136295467Z" level=info msg="StartContainer for \"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\""
Sep 9 05:37:15.139733 containerd[1536]: time="2025-09-09T05:37:15.139638575Z" level=info msg="connecting to shim a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78" address="unix:///run/containerd/s/bf5245eb709413d66013e23ce539e8b637e37af2cd79c3a9c02717b19f579237" protocol=ttrpc version=3
Sep 9 05:37:15.257814 systemd[1]: Started cri-containerd-a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78.scope - libcontainer container a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78.
Sep 9 05:37:15.315589 sshd[5090]: Connection closed by 139.178.89.65 port 44482
Sep 9 05:37:15.314313 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:15.329409 systemd[1]: sshd@12-24.199.106.51:22-139.178.89.65:44482.service: Deactivated successfully.
Sep 9 05:37:15.337251 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:37:15.342470 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:37:15.348121 systemd-logind[1492]: Removed session 12.
Sep 9 05:37:15.351918 systemd[1]: Started sshd@13-24.199.106.51:22-139.178.89.65:44484.service - OpenSSH per-connection server daemon (139.178.89.65:44484).
Sep 9 05:37:15.393743 containerd[1536]: time="2025-09-09T05:37:15.393618144Z" level=info msg="StartContainer for \"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\" returns successfully"
Sep 9 05:37:15.468048 sshd[5127]: Accepted publickey for core from 139.178.89.65 port 44484 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:15.473230 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:15.492387 systemd-logind[1492]: New session 13 of user core.
Sep 9 05:37:15.499898 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:37:15.734603 sshd[5144]: Connection closed by 139.178.89.65 port 44484
Sep 9 05:37:15.735732 sshd-session[5127]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:15.753475 systemd[1]: sshd@13-24.199.106.51:22-139.178.89.65:44484.service: Deactivated successfully.
Sep 9 05:37:15.760901 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:37:15.764099 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:37:15.771293 systemd[1]: Started sshd@14-24.199.106.51:22-139.178.89.65:44496.service - OpenSSH per-connection server daemon (139.178.89.65:44496).
Sep 9 05:37:15.773686 systemd-logind[1492]: Removed session 13.
Sep 9 05:37:15.856799 sshd[5154]: Accepted publickey for core from 139.178.89.65 port 44496 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:15.859579 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:15.867526 systemd-logind[1492]: New session 14 of user core.
Sep 9 05:37:15.877241 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:37:16.035943 sshd[5157]: Connection closed by 139.178.89.65 port 44496
Sep 9 05:37:16.036641 sshd-session[5154]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:16.042326 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:37:16.042474 systemd[1]: sshd@14-24.199.106.51:22-139.178.89.65:44496.service: Deactivated successfully.
Sep 9 05:37:16.049014 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:37:16.055215 systemd-logind[1492]: Removed session 14.
Sep 9 05:37:16.158620 kubelet[2729]: E0909 05:37:16.158432 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:16.327973 containerd[1536]: time="2025-09-09T05:37:16.326685848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\" id:\"4d7c5957a3f2c3ef59d958e2c7a27151598d2ebf943e0b3ee4e11f0a0e904f6d\" pid:5185 exited_at:{seconds:1757396236 nanos:325371293}"
Sep 9 05:37:16.365986 kubelet[2729]: I0909 05:37:16.361983 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-6svxj" podStartSLOduration=31.260030024 podStartE2EDuration="50.357421275s" podCreationTimestamp="2025-09-09 05:36:26 +0000 UTC" firstStartedPulling="2025-09-09 05:36:55.831641658 +0000 UTC m=+54.014138350" lastFinishedPulling="2025-09-09 05:37:14.929032887 +0000 UTC m=+73.111529601" observedRunningTime="2025-09-09 05:37:16.168344792 +0000 UTC m=+74.350841508" watchObservedRunningTime="2025-09-09 05:37:16.357421275 +0000 UTC m=+74.539918006"
Sep 9 05:37:16.766501 containerd[1536]: time="2025-09-09T05:37:16.766385519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:16.767894 containerd[1536]: time="2025-09-09T05:37:16.767643607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 9 05:37:16.768844 containerd[1536]: time="2025-09-09T05:37:16.768786148Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:16.771602 containerd[1536]: time="2025-09-09T05:37:16.771488398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:37:16.772330 containerd[1536]: time="2025-09-09T05:37:16.772292549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.840461308s"
Sep 9 05:37:16.772459 containerd[1536]: time="2025-09-09T05:37:16.772439800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 9 05:37:16.805963 containerd[1536]: time="2025-09-09T05:37:16.805903034Z" level=info msg="CreateContainer within sandbox \"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 9 05:37:16.825796 containerd[1536]: time="2025-09-09T05:37:16.825710873Z" level=info msg="Container 80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:37:16.859316 containerd[1536]: time="2025-09-09T05:37:16.859236139Z" level=info msg="CreateContainer within sandbox \"2274429d59c50b20d67d6aa1ab3155d89cc0ea3b986239396e4369658a063028\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de\""
Sep 9 05:37:16.866605 containerd[1536]: time="2025-09-09T05:37:16.866536377Z" level=info msg="StartContainer for \"80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de\""
Sep 9 05:37:16.870466 containerd[1536]: time="2025-09-09T05:37:16.870400931Z" level=info msg="connecting to shim 80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de" address="unix:///run/containerd/s/11caff9191eb313c3272c723e4aa4e4e588b1368d9af95b0b63f78917de857bf" protocol=ttrpc version=3
Sep 9 05:37:16.902819 systemd[1]: Started cri-containerd-80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de.scope - libcontainer container 80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de.
Sep 9 05:37:16.990588 containerd[1536]: time="2025-09-09T05:37:16.990076381Z" level=info msg="StartContainer for \"80c28742d6ba665ebbd024cda35ca1d6703f3df4fb3daf5ff1741e8c51fd48de\" returns successfully"
Sep 9 05:37:17.151480 kubelet[2729]: I0909 05:37:17.147947 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9q6kr" podStartSLOduration=28.867167411 podStartE2EDuration="51.143705564s" podCreationTimestamp="2025-09-09 05:36:26 +0000 UTC" firstStartedPulling="2025-09-09 05:36:54.497041701 +0000 UTC m=+52.679538415" lastFinishedPulling="2025-09-09 05:37:16.773579877 +0000 UTC m=+74.956076568" observedRunningTime="2025-09-09 05:37:17.141921436 +0000 UTC m=+75.324418151" watchObservedRunningTime="2025-09-09 05:37:17.143705564 +0000 UTC m=+75.326202285"
Sep 9 05:37:17.510572 kubelet[2729]: I0909 05:37:17.506776 2729 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 9 05:37:17.510572 kubelet[2729]: I0909 05:37:17.510509 2729 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 9 05:37:18.744609 containerd[1536]: time="2025-09-09T05:37:18.744525926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\" id:\"203c79601ea1badae874f7814034e0a5d8e11f7bcfca9dd6ca61aa318bf1bfa2\" pid:5250 exit_status:1 exited_at:{seconds:1757396238 nanos:743834301}"
Sep 9 05:37:19.118421 kubelet[2729]: E0909 05:37:19.118176 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:21.054525 systemd[1]: Started sshd@15-24.199.106.51:22-139.178.89.65:57834.service - OpenSSH per-connection server daemon (139.178.89.65:57834).
Sep 9 05:37:21.226075 sshd[5265]: Accepted publickey for core from 139.178.89.65 port 57834 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:21.228129 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:21.235453 systemd-logind[1492]: New session 15 of user core.
Sep 9 05:37:21.240944 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:37:21.922671 sshd[5270]: Connection closed by 139.178.89.65 port 57834
Sep 9 05:37:21.923689 sshd-session[5265]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:21.931989 systemd[1]: sshd@15-24.199.106.51:22-139.178.89.65:57834.service: Deactivated successfully.
Sep 9 05:37:21.935366 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:37:21.937799 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:37:21.940124 systemd-logind[1492]: Removed session 15.
Sep 9 05:37:26.939801 systemd[1]: Started sshd@16-24.199.106.51:22-139.178.89.65:57842.service - OpenSSH per-connection server daemon (139.178.89.65:57842).
Sep 9 05:37:27.029653 sshd[5286]: Accepted publickey for core from 139.178.89.65 port 57842 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:27.032143 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:27.040669 systemd-logind[1492]: New session 16 of user core.
Sep 9 05:37:27.046894 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:37:27.370843 sshd[5289]: Connection closed by 139.178.89.65 port 57842
Sep 9 05:37:27.371483 sshd-session[5286]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:27.377865 systemd[1]: sshd@16-24.199.106.51:22-139.178.89.65:57842.service: Deactivated successfully.
Sep 9 05:37:27.384145 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 05:37:27.389830 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
Sep 9 05:37:27.392052 systemd-logind[1492]: Removed session 16.
Sep 9 05:37:32.393569 systemd[1]: Started sshd@17-24.199.106.51:22-139.178.89.65:36184.service - OpenSSH per-connection server daemon (139.178.89.65:36184).
Sep 9 05:37:32.495510 containerd[1536]: time="2025-09-09T05:37:32.495164389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\" id:\"4c574e59e9bd5c9dbb7d2f1d12b905e8227b1941e079f23e87b7a32c15f215e0\" pid:5318 exited_at:{seconds:1757396252 nanos:494712133}"
Sep 9 05:37:32.648040 sshd[5328]: Accepted publickey for core from 139.178.89.65 port 36184 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:32.651955 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:32.663996 systemd-logind[1492]: New session 17 of user core.
Sep 9 05:37:32.669000 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 05:37:33.290610 sshd[5332]: Connection closed by 139.178.89.65 port 36184
Sep 9 05:37:33.293334 sshd-session[5328]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:33.316422 systemd[1]: sshd@17-24.199.106.51:22-139.178.89.65:36184.service: Deactivated successfully.
Sep 9 05:37:33.316735 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
Sep 9 05:37:33.321111 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 05:37:33.325346 systemd-logind[1492]: Removed session 17.
Sep 9 05:37:35.376165 containerd[1536]: time="2025-09-09T05:37:35.376111227Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\" id:\"186816a6f7ef9271fe8e9e8517bc6925e863da621cf3ec4286e7c53db6b7a789\" pid:5355 exited_at:{seconds:1757396255 nanos:375269202}"
Sep 9 05:37:37.121202 kubelet[2729]: E0909 05:37:37.121122 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:37.122074 kubelet[2729]: E0909 05:37:37.121965 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:38.310507 systemd[1]: Started sshd@18-24.199.106.51:22-139.178.89.65:36200.service - OpenSSH per-connection server daemon (139.178.89.65:36200).
Sep 9 05:37:38.443385 sshd[5367]: Accepted publickey for core from 139.178.89.65 port 36200 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:38.446697 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:38.455351 systemd-logind[1492]: New session 18 of user core.
Sep 9 05:37:38.458842 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 05:37:38.793251 sshd[5370]: Connection closed by 139.178.89.65 port 36200
Sep 9 05:37:38.799608 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:38.814439 systemd[1]: sshd@18-24.199.106.51:22-139.178.89.65:36200.service: Deactivated successfully.
Sep 9 05:37:38.818440 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 05:37:38.820736 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
Sep 9 05:37:38.824286 systemd-logind[1492]: Removed session 18.
Sep 9 05:37:38.827408 systemd[1]: Started sshd@19-24.199.106.51:22-139.178.89.65:36202.service - OpenSSH per-connection server daemon (139.178.89.65:36202).
Sep 9 05:37:38.916537 sshd[5384]: Accepted publickey for core from 139.178.89.65 port 36202 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:38.918721 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:38.927737 systemd-logind[1492]: New session 19 of user core.
Sep 9 05:37:38.935978 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 05:37:39.335367 sshd[5387]: Connection closed by 139.178.89.65 port 36202
Sep 9 05:37:39.337796 sshd-session[5384]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:39.351779 systemd[1]: sshd@19-24.199.106.51:22-139.178.89.65:36202.service: Deactivated successfully.
Sep 9 05:37:39.358628 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 05:37:39.362433 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
Sep 9 05:37:39.367363 systemd[1]: Started sshd@20-24.199.106.51:22-139.178.89.65:36212.service - OpenSSH per-connection server daemon (139.178.89.65:36212).
Sep 9 05:37:39.372221 systemd-logind[1492]: Removed session 19.
Sep 9 05:37:39.507322 sshd[5397]: Accepted publickey for core from 139.178.89.65 port 36212 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:39.510024 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:39.516149 systemd-logind[1492]: New session 20 of user core.
Sep 9 05:37:39.529008 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 05:37:40.302121 sshd[5400]: Connection closed by 139.178.89.65 port 36212
Sep 9 05:37:40.306130 sshd-session[5397]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:40.322817 systemd[1]: Started sshd@21-24.199.106.51:22-139.178.89.65:57540.service - OpenSSH per-connection server daemon (139.178.89.65:57540).
Sep 9 05:37:40.324763 systemd[1]: sshd@20-24.199.106.51:22-139.178.89.65:36212.service: Deactivated successfully.
Sep 9 05:37:40.339412 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 05:37:40.344345 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
Sep 9 05:37:40.353674 systemd-logind[1492]: Removed session 20.
Sep 9 05:37:40.452139 sshd[5414]: Accepted publickey for core from 139.178.89.65 port 57540 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:40.455514 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:40.463688 systemd-logind[1492]: New session 21 of user core.
Sep 9 05:37:40.470906 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 05:37:41.226870 containerd[1536]: time="2025-09-09T05:37:41.226808524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e893d3246ce3d3b04a8bd05625bd4e528266cccfc39306cac8363f5df1a2355\" id:\"eb2fb00883bdd9ae763bbdffcda8a0a8383ba6bc959e852a5801d6f9a4b33305\" pid:5441 exited_at:{seconds:1757396261 nanos:226414400}"
Sep 9 05:37:41.317914 sshd[5423]: Connection closed by 139.178.89.65 port 57540
Sep 9 05:37:41.320397 sshd-session[5414]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:41.337746 systemd[1]: sshd@21-24.199.106.51:22-139.178.89.65:57540.service: Deactivated successfully.
Sep 9 05:37:41.343829 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 05:37:41.346225 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
Sep 9 05:37:41.351828 systemd-logind[1492]: Removed session 21.
Sep 9 05:37:41.356289 systemd[1]: Started sshd@22-24.199.106.51:22-139.178.89.65:57542.service - OpenSSH per-connection server daemon (139.178.89.65:57542).
Sep 9 05:37:41.463657 sshd[5454]: Accepted publickey for core from 139.178.89.65 port 57542 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:41.465569 sshd-session[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:41.473030 systemd-logind[1492]: New session 22 of user core.
Sep 9 05:37:41.480836 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 05:37:41.668167 sshd[5457]: Connection closed by 139.178.89.65 port 57542
Sep 9 05:37:41.667203 sshd-session[5454]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:41.673361 systemd[1]: sshd@22-24.199.106.51:22-139.178.89.65:57542.service: Deactivated successfully.
Sep 9 05:37:41.677042 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 05:37:41.679083 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
Sep 9 05:37:41.681289 systemd-logind[1492]: Removed session 22.
Sep 9 05:37:43.130911 kubelet[2729]: E0909 05:37:43.130811 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 9 05:37:46.265946 containerd[1536]: time="2025-09-09T05:37:46.265866071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a64c2127523bf3af916bc6a99cd5f9d286fc346731d42ec265e1b77a08718c78\" id:\"35c0668021d422b5553aeaae2b0ed9794be868142825760168d782b17cd92179\" pid:5483 exited_at:{seconds:1757396266 nanos:265168549}"
Sep 9 05:37:46.693746 systemd[1]: Started sshd@23-24.199.106.51:22-139.178.89.65:57550.service - OpenSSH per-connection server daemon (139.178.89.65:57550).
Sep 9 05:37:46.775334 sshd[5494]: Accepted publickey for core from 139.178.89.65 port 57550 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:46.777496 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:46.784984 systemd-logind[1492]: New session 23 of user core.
Sep 9 05:37:46.796885 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 05:37:47.001360 sshd[5497]: Connection closed by 139.178.89.65 port 57550
Sep 9 05:37:47.003667 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:47.011380 systemd[1]: sshd@23-24.199.106.51:22-139.178.89.65:57550.service: Deactivated successfully.
Sep 9 05:37:47.015032 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 05:37:47.016953 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
Sep 9 05:37:47.021257 systemd-logind[1492]: Removed session 23.
Sep 9 05:37:48.730314 containerd[1536]: time="2025-09-09T05:37:48.730250258Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ccde05e9dcd60f64f5b27a69a0f603460744406ccf01864fffb0a7bb19848a82\" id:\"80e1e884e5768b7d19e06b53dfc3dc7e59b701e5ca0488b6642d1d29d542e604\" pid:5521 exited_at:{seconds:1757396268 nanos:729793416}"
Sep 9 05:37:52.021598 systemd[1]: Started sshd@24-24.199.106.51:22-139.178.89.65:38034.service - OpenSSH per-connection server daemon (139.178.89.65:38034).
Sep 9 05:37:52.155955 sshd[5534]: Accepted publickey for core from 139.178.89.65 port 38034 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:52.158606 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:52.166639 systemd-logind[1492]: New session 24 of user core.
Sep 9 05:37:52.173883 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 05:37:52.463254 sshd[5537]: Connection closed by 139.178.89.65 port 38034
Sep 9 05:37:52.463173 sshd-session[5534]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:52.472285 systemd[1]: sshd@24-24.199.106.51:22-139.178.89.65:38034.service: Deactivated successfully.
Sep 9 05:37:52.475044 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 05:37:52.476771 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Sep 9 05:37:52.478798 systemd-logind[1492]: Removed session 24.
Sep 9 05:37:57.479718 systemd[1]: Started sshd@25-24.199.106.51:22-139.178.89.65:38038.service - OpenSSH per-connection server daemon (139.178.89.65:38038).
Sep 9 05:37:57.608355 sshd[5549]: Accepted publickey for core from 139.178.89.65 port 38038 ssh2: RSA SHA256:il44XmC7L10b7xXYUsCD784Q5uLKIODTWUSaGZ393Bk
Sep 9 05:37:57.613217 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:37:57.621665 systemd-logind[1492]: New session 25 of user core.
Sep 9 05:37:57.628971 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 05:37:58.122616 sshd[5552]: Connection closed by 139.178.89.65 port 38038
Sep 9 05:37:58.126790 sshd-session[5549]: pam_unix(sshd:session): session closed for user core
Sep 9 05:37:58.142303 systemd[1]: sshd@25-24.199.106.51:22-139.178.89.65:38038.service: Deactivated successfully.
Sep 9 05:37:58.151107 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 05:37:58.153776 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
Sep 9 05:37:58.156302 systemd-logind[1492]: Removed session 25.