Aug 13 00:44:31.867548 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 00:44:31.867593 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:44:31.867607 kernel: BIOS-provided physical RAM map: Aug 13 00:44:31.867619 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 00:44:31.867629 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 00:44:31.867639 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:44:31.867651 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 00:44:31.867668 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 00:44:31.867684 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:44:31.867696 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:44:31.867703 kernel: NX (Execute Disable) protection: active Aug 13 00:44:31.867710 kernel: APIC: Static calls initialized Aug 13 00:44:31.867717 kernel: SMBIOS 2.8 present. Aug 13 00:44:31.867724 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 00:44:31.867736 kernel: DMI: Memory slots populated: 1/1 Aug 13 00:44:31.867744 kernel: Hypervisor detected: KVM Aug 13 00:44:31.867754 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:44:31.867762 kernel: kvm-clock: using sched offset of 4779493315 cycles Aug 13 00:44:31.867771 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:44:31.867779 kernel: tsc: Detected 2494.140 MHz processor Aug 13 00:44:31.867788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:44:31.867796 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:44:31.867808 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 00:44:31.867824 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 00:44:31.867836 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:44:31.867848 kernel: ACPI: Early table checksum verification disabled Aug 13 00:44:31.867859 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 00:44:31.867871 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867882 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867894 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867906 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:44:31.867917 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867933 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867945 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:44:31.867958 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 
BXPC 00000001) Aug 13 00:44:31.867970 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 00:44:31.867983 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 00:44:31.867992 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:44:31.868000 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 00:44:31.868010 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 00:44:31.868031 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 00:44:31.868045 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 00:44:31.868059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 00:44:31.868068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 00:44:31.868077 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Aug 13 00:44:31.868086 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Aug 13 00:44:31.868103 kernel: Zone ranges: Aug 13 00:44:31.868116 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:44:31.868130 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 00:44:31.868143 kernel: Normal empty Aug 13 00:44:31.868151 kernel: Device empty Aug 13 00:44:31.868160 kernel: Movable zone start for each node Aug 13 00:44:31.868168 kernel: Early memory node ranges Aug 13 00:44:31.868176 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:44:31.868184 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 00:44:31.868197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 00:44:31.868210 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:44:31.868223 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:44:31.868235 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 00:44:31.868247 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:44:31.868258 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:44:31.868275 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:44:31.868287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:44:31.868303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:44:31.868322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:44:31.868338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:44:31.868351 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:44:31.868364 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:44:31.868449 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:44:31.868458 kernel: TSC deadline timer available Aug 13 00:44:31.868467 kernel: CPU topo: Max. logical packages: 1 Aug 13 00:44:31.868475 kernel: CPU topo: Max. logical dies: 1 Aug 13 00:44:31.868483 kernel: CPU topo: Max. dies per package: 1 Aug 13 00:44:31.868492 kernel: CPU topo: Max. threads per core: 1 Aug 13 00:44:31.868505 kernel: CPU topo: Num. cores per package: 2 Aug 13 00:44:31.868513 kernel: CPU topo: Num. 
threads per package: 2 Aug 13 00:44:31.868521 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 00:44:31.868530 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 00:44:31.868539 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 00:44:31.868547 kernel: Booting paravirtualized kernel on KVM Aug 13 00:44:31.868556 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:44:31.868565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:44:31.868573 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 00:44:31.868585 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 00:44:31.868593 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:44:31.868602 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 00:44:31.868612 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:44:31.868621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:44:31.868629 kernel: random: crng init done Aug 13 00:44:31.868637 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:44:31.868650 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:44:31.868665 kernel: Fallback order for Node 0: 0 Aug 13 00:44:31.868676 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Aug 13 00:44:31.868687 kernel: Policy zone: DMA32 Aug 13 00:44:31.868700 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:44:31.868712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:44:31.868725 kernel: Kernel/User page tables isolation: enabled Aug 13 00:44:31.868738 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 00:44:31.868751 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 00:44:31.868762 kernel: Dynamic Preempt: voluntary Aug 13 00:44:31.868779 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:44:31.868799 kernel: rcu: RCU event tracing is enabled. Aug 13 00:44:31.868812 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:44:31.868826 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:44:31.868839 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:44:31.868851 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:44:31.868863 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:44:31.868872 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:44:31.868881 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:44:31.868903 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:44:31.868914 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Aug 13 00:44:31.868922 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:44:31.868930 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:44:31.868939 kernel: Console: colour VGA+ 80x25 Aug 13 00:44:31.868947 kernel: printk: legacy console [tty0] enabled Aug 13 00:44:31.868956 kernel: printk: legacy console [ttyS0] enabled Aug 13 00:44:31.868964 kernel: ACPI: Core revision 20240827 Aug 13 00:44:31.868973 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:44:31.868992 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:44:31.869001 kernel: x2apic enabled Aug 13 00:44:31.869010 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 00:44:31.869021 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:44:31.869038 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:44:31.869052 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140) Aug 13 00:44:31.869066 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 00:44:31.869079 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 00:44:31.869107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:44:31.869125 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:44:31.869140 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:44:31.869155 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 00:44:31.869166 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:44:31.869175 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 00:44:31.869184 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 00:44:31.869193 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:44:31.869205 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:44:31.869215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:44:31.869224 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:44:31.869233 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:44:31.869246 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:44:31.869260 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:44:31.869270 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:44:31.869279 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:44:31.869288 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 00:44:31.869299 kernel: landlock: Up and running. Aug 13 00:44:31.869308 kernel: SELinux: Initializing. Aug 13 00:44:31.869317 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:44:31.869326 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:44:31.869335 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 00:44:31.869344 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 13 00:44:31.869353 kernel: signal: max sigframe size: 1776 Aug 13 00:44:31.869361 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:44:31.871425 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 00:44:31.871461 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 00:44:31.871476 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:44:31.871487 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:44:31.871496 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:44:31.871511 kernel: .... node #0, CPUs: #1 Aug 13 00:44:31.871521 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:44:31.871535 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Aug 13 00:44:31.871550 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 125140K reserved, 0K cma-reserved) Aug 13 00:44:31.871559 kernel: devtmpfs: initialized Aug 13 00:44:31.871576 kernel: x86/mm: Memory block size: 128MB Aug 13 00:44:31.871590 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:44:31.871604 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:44:31.871618 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:44:31.871632 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:44:31.871645 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:44:31.871658 kernel: audit: type=2000 audit(1755045868.392:1): state=initialized audit_enabled=0 res=1 Aug 13 00:44:31.871667 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:44:31.871676 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:44:31.871688 kernel: cpuidle: using governor menu Aug 13 00:44:31.871698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:44:31.871711 kernel: dca service started, version 1.12.1 Aug 13 00:44:31.871720 kernel: PCI: Using configuration type 1 for base access Aug 13 00:44:31.871729 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 00:44:31.871743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:44:31.871755 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:44:31.871764 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:44:31.871778 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:44:31.871796 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:44:31.871805 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:44:31.871824 kernel: ACPI: Interpreter enabled Aug 13 00:44:31.871834 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:44:31.871843 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:44:31.871852 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:44:31.871861 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 00:44:31.871870 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 00:44:31.871878 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:44:31.872093 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:44:31.872194 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 13 00:44:31.872291 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 13 00:44:31.872303 kernel: acpiphp: Slot [3] registered Aug 13 00:44:31.872312 kernel: acpiphp: Slot [4] registered Aug 13 00:44:31.872321 kernel: acpiphp: Slot [5] registered Aug 13 00:44:31.872330 kernel: acpiphp: Slot [6] registered Aug 13 00:44:31.872343 kernel: acpiphp: Slot [7] registered Aug 13 00:44:31.872352 kernel: acpiphp: Slot [8] registered Aug 13 00:44:31.872360 kernel: acpiphp: Slot [9] registered Aug 13 00:44:31.874410 kernel: acpiphp: Slot [10] registered Aug 13 00:44:31.874450 kernel: acpiphp: Slot [11] registered Aug 13 00:44:31.874461 kernel: acpiphp: Slot [12] registered Aug 13 00:44:31.874472 kernel: acpiphp: Slot [13] registered Aug 13 00:44:31.874485 kernel: acpiphp: Slot [14] registered Aug 13 00:44:31.874495 kernel: acpiphp: Slot [15] registered Aug 13 00:44:31.874528 kernel: acpiphp: Slot [16] registered Aug 13 00:44:31.874545 kernel: acpiphp: Slot [17] registered Aug 13 00:44:31.874554 kernel: acpiphp: Slot [18] registered Aug 13 00:44:31.874563 kernel: acpiphp: Slot [19] registered Aug 13 00:44:31.874575 kernel: acpiphp: Slot [20] registered Aug 13 00:44:31.874585 kernel: acpiphp: Slot [21] registered Aug 13 00:44:31.874598 kernel: acpiphp: Slot [22] registered Aug 13 00:44:31.874612 kernel: acpiphp: Slot [23] registered Aug 13 00:44:31.874626 kernel: acpiphp: Slot [24] registered Aug 13 00:44:31.874640 kernel: acpiphp: Slot [25] registered Aug 13 00:44:31.874656 kernel: acpiphp: Slot [26] registered Aug 13 00:44:31.874665 kernel: acpiphp: Slot [27] registered Aug 13 00:44:31.874674 kernel: acpiphp: Slot [28] registered Aug 13 00:44:31.874683 kernel: acpiphp: Slot [29] registered Aug 13 00:44:31.874692 kernel: acpiphp: Slot [30] registered Aug 13 00:44:31.874701 kernel: acpiphp: Slot [31] registered Aug 13 00:44:31.874714 kernel: PCI host bridge to bus 0000:00 Aug 13 00:44:31.874931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:44:31.875033 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:44:31.875183 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:44:31.875274 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 00:44:31.875477 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 00:44:31.875594 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:44:31.875726 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Aug 13 00:44:31.875871 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Aug 13 00:44:31.876017 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Aug 13 00:44:31.876126 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Aug 13 00:44:31.876227 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Aug 13 00:44:31.877399 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Aug 13 00:44:31.877577 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Aug 13 00:44:31.877687 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Aug 13 00:44:31.877878 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Aug 13 00:44:31.878046 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Aug 13 00:44:31.878167 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Aug 13 00:44:31.878298 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 00:44:31.878497 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 00:44:31.878648 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Aug 13 00:44:31.878773 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Aug 13 00:44:31.878913 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 00:44:31.879011 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Aug 13 00:44:31.879125 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Aug 13 00:44:31.879252 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:44:31.882103 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 00:44:31.882297 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Aug 13 00:44:31.882435 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Aug 13 00:44:31.882532 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 00:44:31.882679 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 00:44:31.882792 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Aug 13 00:44:31.882886 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Aug 13 00:44:31.882987 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 00:44:31.883139 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:44:31.883298 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Aug 13 00:44:31.883427 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Aug 13 00:44:31.883530 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 00:44:31.883658 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:44:31.883770 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Aug 13 00:44:31.883863 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Aug 13 00:44:31.884000 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 00:44:31.884123 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Aug 13 00:44:31.884253 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Aug 13 00:44:31.884349 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Aug 13 00:44:31.885418 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 00:44:31.885572 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Aug 13 00:44:31.885692 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Aug 13 00:44:31.885819 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 00:44:31.885837 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:44:31.885851 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:44:31.885865 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:44:31.885879 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:44:31.885892 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:44:31.885906 kernel: iommu: Default domain type: Translated Aug 13 00:44:31.885919 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:44:31.885930 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:44:31.885943 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:44:31.885953 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 00:44:31.885962 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 00:44:31.886074 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 00:44:31.886212 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 00:44:31.886344 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:44:31.886357 kernel: vgaarb: loaded Aug 13 00:44:31.886367 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:44:31.886391 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:44:31.886407 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:44:31.886416 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:44:31.886426 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:44:31.886435 kernel: pnp: PnP ACPI init Aug 13 00:44:31.886444 kernel: pnp: PnP ACPI: found 4 devices Aug 13 00:44:31.886453 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:44:31.886463 kernel: NET: Registered PF_INET protocol family Aug 13 00:44:31.886472 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:44:31.886483 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 00:44:31.886498 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:44:31.886507 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:44:31.886517 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 13 00:44:31.886526 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:44:31.886534 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:44:31.886543 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:44:31.886552 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:44:31.886562 kernel: NET: Registered PF_XDP protocol family Aug 13 00:44:31.886685 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:44:31.886772 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:44:31.886872 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:44:31.886959 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 00:44:31.887060 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 00:44:31.887164 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 00:44:31.887261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:44:31.887276 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 00:44:31.887397 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27587 usecs Aug 13 00:44:31.887411 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:44:31.887424 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:44:31.887437 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Aug 13 00:44:31.887446 kernel: Initialise system trusted keyrings Aug 13 00:44:31.887456 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:44:31.887467 kernel: Key type asymmetric registered Aug 13 00:44:31.887476 kernel: Asymmetric key parser 'x509' registered Aug 13 00:44:31.887485 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:44:31.887498 kernel: io scheduler mq-deadline registered Aug 13 00:44:31.887507 kernel: io scheduler kyber registered Aug 13 00:44:31.887516 kernel: io scheduler bfq registered Aug 13 00:44:31.887525 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:44:31.887535 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 00:44:31.887544 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 00:44:31.887553 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 00:44:31.887562 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:44:31.887571 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:44:31.887583 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:44:31.887592 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:44:31.887601 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:44:31.887770 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 00:44:31.887913 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 00:44:31.888044 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:44:31 UTC (1755045871) Aug 13 00:44:31.888144 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 00:44:31.888157 kernel: intel_pstate: CPU model not supported Aug 13 00:44:31.888172 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Aug 13 00:44:31.888181 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:44:31.888190 kernel: Segment Routing with IPv6 Aug 13 00:44:31.888199 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:44:31.888208 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:44:31.888217 kernel: Key type dns_resolver registered Aug 13 00:44:31.888226 kernel: IPI shorthand broadcast: enabled Aug 13 00:44:31.888236 kernel: sched_clock: Marking stable (3503003419, 118941470)->(3646893843, -24948954) Aug 13 00:44:31.888244 kernel: registered taskstats version 1 Aug 13 00:44:31.888256 kernel: Loading compiled-in X.509 certificates Aug 13 00:44:31.888265 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 
13 00:44:31.888274 kernel: Demotion targets for Node 0: null Aug 13 00:44:31.888284 kernel: Key type .fscrypt registered Aug 13 00:44:31.888297 kernel: Key type fscrypt-provisioning registered Aug 13 00:44:31.888313 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:44:31.888350 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:44:31.888367 kernel: ima: No architecture policies found Aug 13 00:44:31.888399 kernel: clk: Disabling unused clocks Aug 13 00:44:31.888412 kernel: Warning: unable to open an initial console. Aug 13 00:44:31.888429 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 00:44:31.888446 kernel: Write protecting the kernel read-only data: 24576k Aug 13 00:44:31.888460 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 00:44:31.888473 kernel: Run /init as init process Aug 13 00:44:31.888489 kernel: with arguments: Aug 13 00:44:31.888504 kernel: /init Aug 13 00:44:31.888518 kernel: with environment: Aug 13 00:44:31.888533 kernel: HOME=/ Aug 13 00:44:31.888552 kernel: TERM=linux Aug 13 00:44:31.888566 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:44:31.888583 systemd[1]: Successfully made /usr/ read-only. Aug 13 00:44:31.888603 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:44:31.888619 systemd[1]: Detected virtualization kvm. Aug 13 00:44:31.888634 systemd[1]: Detected architecture x86-64. Aug 13 00:44:31.888649 systemd[1]: Running in initrd. Aug 13 00:44:31.888671 systemd[1]: No hostname configured, using default hostname. Aug 13 00:44:31.888689 systemd[1]: Hostname set to . Aug 13 00:44:31.888703 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:44:31.888717 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:44:31.888732 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:44:31.888748 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:44:31.888764 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:44:31.888781 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:44:31.888802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:44:31.888820 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:44:31.888838 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:44:31.888858 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:44:31.888877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:44:31.888894 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:44:31.888911 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:44:31.888926 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:44:31.888943 systemd[1]: Reached target swap.target - Swaps. 
Aug 13 00:44:31.888960 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:44:31.889005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:44:31.889022 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:44:31.889043 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:44:31.889060 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:44:31.889076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:44:31.889104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:44:31.889120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:44:31.889135 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:44:31.889152 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:44:31.889169 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:44:31.889184 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:44:31.889207 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 00:44:31.889224 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:44:31.889241 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:44:31.889258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:44:31.889275 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:44:31.889292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:44:31.889313 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:44:31.889328 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:44:31.889417 systemd-journald[212]: Collecting audit messages is disabled. Aug 13 00:44:31.889462 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:44:31.889478 systemd-journald[212]: Journal started Aug 13 00:44:31.889514 systemd-journald[212]: Runtime Journal (/run/log/journal/576e9e830c87411ea020c0d47ae7e8af) is 4.9M, max 39.5M, 34.6M free. Aug 13 00:44:31.893950 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:44:31.853321 systemd-modules-load[214]: Inserted module 'overlay' Aug 13 00:44:31.895444 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:44:31.896565 kernel: Bridge firewalling registered Aug 13 00:44:31.896145 systemd-modules-load[214]: Inserted module 'br_netfilter' Aug 13 00:44:31.900633 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:44:31.906612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:44:31.908945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:44:31.911449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:44:31.947617 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:31.952640 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Aug 13 00:44:31.956612 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:44:31.961138 systemd-tmpfiles[229]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 00:44:31.978633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:44:31.980177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:44:31.984580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:44:31.991428 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:44:31.999531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:44:32.001625 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:44:32.031403 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 00:44:32.040857 systemd-resolved[240]: Positive Trust Anchors: Aug 13 00:44:32.040874 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:44:32.040911 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:44:32.045475 systemd-resolved[240]: Defaulting to hostname 'linux'. Aug 13 00:44:32.047436 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:44:32.048771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:44:32.141442 kernel: SCSI subsystem initialized Aug 13 00:44:32.150407 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:44:32.162411 kernel: iscsi: registered transport (tcp) Aug 13 00:44:32.184619 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:44:32.184755 kernel: QLogic iSCSI HBA Driver Aug 13 00:44:32.218234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:44:32.252247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:44:32.256173 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:44:32.340250 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:44:32.343332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Aug 13 00:44:32.411495 kernel: raid6: avx2x4 gen() 17147 MB/s Aug 13 00:44:32.428469 kernel: raid6: avx2x2 gen() 21862 MB/s Aug 13 00:44:32.445567 kernel: raid6: avx2x1 gen() 20124 MB/s Aug 13 00:44:32.445691 kernel: raid6: using algorithm avx2x2 gen() 21862 MB/s Aug 13 00:44:32.463691 kernel: raid6: .... xor() 19084 MB/s, rmw enabled Aug 13 00:44:32.463799 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:44:32.487445 kernel: xor: automatically using best checksumming function avx Aug 13 00:44:32.678459 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:44:32.689354 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:44:32.692882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:44:32.734098 systemd-udevd[460]: Using default interface naming scheme 'v255'. Aug 13 00:44:32.742965 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:44:32.748617 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:44:32.788728 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Aug 13 00:44:32.834058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:44:32.837576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:44:32.915362 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:44:32.919885 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:44:33.019689 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Aug 13 00:44:33.023459 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Aug 13 00:44:33.029414 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:44:33.050445 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 00:44:33.072745 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:44:33.072880 kernel: GPT:9289727 != 125829119 Aug 13 00:44:33.072901 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:44:33.072920 kernel: GPT:9289727 != 125829119 Aug 13 00:44:33.072953 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:44:33.072972 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:44:33.073392 kernel: ACPI: bus type USB registered Aug 13 00:44:33.074791 kernel: usbcore: registered new interface driver usbfs Aug 13 00:44:33.074860 kernel: usbcore: registered new interface driver hub Aug 13 00:44:33.078288 kernel: usbcore: registered new device driver usb Aug 13 00:44:33.101403 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Aug 13 00:44:33.105465 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Aug 13 00:44:33.120413 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 00:44:33.124471 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 00:44:33.124825 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 00:44:33.125000 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:44:33.125020 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Aug 13 00:44:33.132431 kernel: hub 1-0:1.0: USB hub found Aug 13 00:44:33.134415 kernel: hub 1-0:1.0: 2 ports detected Aug 13 00:44:33.157436 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:44:33.181596 kernel: libata version 3.00 loaded. 
Aug 13 00:44:33.183406 kernel: AES CTR mode by8 optimization enabled Aug 13 00:44:33.183490 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 00:44:33.188399 kernel: scsi host1: ata_piix Aug 13 00:44:33.188718 kernel: scsi host2: ata_piix Aug 13 00:44:33.197503 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Aug 13 00:44:33.197598 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Aug 13 00:44:33.218829 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:44:33.219054 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:33.226180 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:44:33.239159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:44:33.244042 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:44:33.304918 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 13 00:44:33.324007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:44:33.346230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:33.371062 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 13 00:44:33.372204 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:44:33.396202 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 13 00:44:33.396985 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 13 00:44:33.398367 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:44:33.399121 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:44:33.400137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:44:33.402589 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:44:33.405616 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:44:33.431686 disk-uuid[612]: Primary Header is updated. Aug 13 00:44:33.431686 disk-uuid[612]: Secondary Entries is updated. Aug 13 00:44:33.431686 disk-uuid[612]: Secondary Header is updated. Aug 13 00:44:33.441693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:44:33.451484 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:44:34.462421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:44:34.463981 disk-uuid[615]: The operation has completed successfully. Aug 13 00:44:34.517520 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:44:34.517683 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:44:34.548660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:44:34.579997 sh[631]: Success Aug 13 00:44:34.608451 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Aug 13 00:44:34.608572 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:44:34.610458 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 00:44:34.623717 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Aug 13 00:44:34.689813 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:44:34.693507 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:44:34.706713 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:44:34.720408 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 00:44:34.720512 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (643) Aug 13 00:44:34.725274 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 00:44:34.725396 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:44:34.725411 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 00:44:34.735349 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:44:34.737329 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:44:34.738677 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:44:34.740587 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:44:34.742724 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:44:34.771529 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (672) Aug 13 00:44:34.775240 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:44:34.775336 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:44:34.775702 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:44:34.787472 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:44:34.788788 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:44:34.792207 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:44:34.905929 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:44:34.912662 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:44:34.989745 systemd-networkd[814]: lo: Link UP Aug 13 00:44:34.989761 systemd-networkd[814]: lo: Gained carrier Aug 13 00:44:34.994530 systemd-networkd[814]: Enumeration completed Aug 13 00:44:34.994756 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:44:34.995255 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 00:44:34.995261 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 00:44:34.999889 ignition[719]: Ignition 2.21.0 Aug 13 00:44:34.995351 systemd[1]: Reached target network.target - Network. 
Aug 13 00:44:34.999896 ignition[719]: Stage: fetch-offline Aug 13 00:44:34.998320 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:44:34.999949 ignition[719]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:34.998326 systemd-networkd[814]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:44:34.999960 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:34.999696 systemd-networkd[814]: eth0: Link UP Aug 13 00:44:35.000088 ignition[719]: parsed url from cmdline: "" Aug 13 00:44:34.999904 systemd-networkd[814]: eth1: Link UP Aug 13 00:44:35.000092 ignition[719]: no config URL provided Aug 13 00:44:35.000086 systemd-networkd[814]: eth0: Gained carrier Aug 13 00:44:35.000099 ignition[719]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:44:35.000103 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Aug 13 00:44:35.000108 ignition[719]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:44:35.002910 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:44:35.000123 ignition[719]: failed to fetch config: resource requires networking Aug 13 00:44:35.005576 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 00:44:35.000355 ignition[719]: Ignition finished successfully Aug 13 00:44:35.006609 systemd-networkd[814]: eth1: Gained carrier Aug 13 00:44:35.006630 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:44:35.018554 systemd-networkd[814]: eth0: DHCPv4 address 146.190.133.69/20, gateway 146.190.128.1 acquired from 169.254.169.253 Aug 13 00:44:35.031528 systemd-networkd[814]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Aug 13 00:44:35.043729 ignition[822]: Ignition 2.21.0 Aug 13 00:44:35.043750 ignition[822]: Stage: fetch Aug 13 00:44:35.043927 ignition[822]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:35.043938 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:35.044037 ignition[822]: parsed url from cmdline: "" Aug 13 00:44:35.044041 ignition[822]: no config URL provided Aug 13 00:44:35.044048 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:44:35.044056 ignition[822]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:44:35.044857 ignition[822]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 00:44:35.062329 ignition[822]: GET result: OK Aug 13 00:44:35.062490 ignition[822]: parsing config with SHA512: 9053b75700fd5e47334107b2aae755d8755453489b31acc4370f006f60307d15ff6cdab38ff76a1b7df3845bdd68fde5267ea449e6fc31c1f126efbfa0685689 Aug 13 00:44:35.069408 unknown[822]: fetched base config from "system" Aug 13 00:44:35.069437 unknown[822]: fetched base config from "system" Aug 13 00:44:35.069811 ignition[822]: fetch: fetch complete Aug 13 00:44:35.069444 unknown[822]: fetched user config from "digitalocean" Aug 13 00:44:35.069818 ignition[822]: fetch: fetch passed Aug 13 00:44:35.069878 ignition[822]: Ignition finished successfully Aug 13 00:44:35.072993 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:44:35.074855 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 13 00:44:35.122362 ignition[829]: Ignition 2.21.0 Aug 13 00:44:35.123255 ignition[829]: Stage: kargs Aug 13 00:44:35.123491 ignition[829]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:35.123502 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:35.126468 ignition[829]: kargs: kargs passed Aug 13 00:44:35.127137 ignition[829]: Ignition finished successfully Aug 13 00:44:35.130416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:44:35.132684 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:44:35.164556 ignition[835]: Ignition 2.21.0 Aug 13 00:44:35.164573 ignition[835]: Stage: disks Aug 13 00:44:35.164791 ignition[835]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:35.164807 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:35.169763 ignition[835]: disks: disks passed Aug 13 00:44:35.169923 ignition[835]: Ignition finished successfully Aug 13 00:44:35.173048 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:44:35.174503 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:44:35.175522 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:44:35.176554 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:44:35.177321 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:44:35.178239 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:44:35.180397 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:44:35.212767 systemd-fsck[843]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 00:44:35.216538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:44:35.219611 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:44:35.358393 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 00:44:35.359587 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:44:35.361463 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:44:35.363890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:44:35.366061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:44:35.370718 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Aug 13 00:44:35.378688 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 13 00:44:35.379851 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:44:35.380669 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:44:35.385645 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:44:35.389611 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 13 00:44:35.401412 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (851) Aug 13 00:44:35.404443 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:44:35.406876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:44:35.406997 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:44:35.438895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:44:35.484315 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:44:35.505891 initrd-setup-root[888]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:44:35.516102 coreos-metadata[854]: Aug 13 00:44:35.515 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:44:35.521279 coreos-metadata[853]: Aug 13 00:44:35.521 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:44:35.523041 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:44:35.529307 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:44:35.532849 coreos-metadata[854]: Aug 13 00:44:35.532 INFO Fetch successful Aug 13 00:44:35.534607 coreos-metadata[853]: Aug 13 00:44:35.534 INFO Fetch successful Aug 13 00:44:35.543470 coreos-metadata[854]: Aug 13 00:44:35.542 INFO wrote hostname ci-4372.1.0-8-f473d4f215 to /sysroot/etc/hostname Aug 13 00:44:35.544777 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:44:35.548522 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Aug 13 00:44:35.548728 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Aug 13 00:44:35.665761 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:44:35.668289 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:44:35.669916 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:44:35.692395 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:44:35.713092 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:44:35.723066 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:44:35.734148 ignition[972]: INFO : Ignition 2.21.0 Aug 13 00:44:35.734148 ignition[972]: INFO : Stage: mount Aug 13 00:44:35.735399 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:35.735399 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:35.737535 ignition[972]: INFO : mount: mount passed Aug 13 00:44:35.737535 ignition[972]: INFO : Ignition finished successfully Aug 13 00:44:35.739620 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:44:35.741583 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:44:35.765923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:44:35.799422 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (985) Aug 13 00:44:35.801966 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 00:44:35.802029 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:44:35.802043 kernel: BTRFS info (device vda6): using free-space-tree Aug 13 00:44:35.807044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
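The flatcar-metadata-hostname lines above fetch the droplet's metadata JSON and write the machine's hostname into the new root. A rough Python equivalent of that step; the endpoint URL and target path are taken from the log, while the "hostname" key is an assumption about the DigitalOcean metadata schema:

```python
# Rough equivalent of the coreos-metadata hostname step logged above;
# not the agent's actual implementation.
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/metadata/v1.json"  # from the log

with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
    metadata = json.load(resp)

hostname = metadata["hostname"]  # assumed key, e.g. ci-4372.1.0-8-f473d4f215
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to /sysroot/etc/hostname")
```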
Aug 13 00:44:35.845489 ignition[1002]: INFO : Ignition 2.21.0 Aug 13 00:44:35.847263 ignition[1002]: INFO : Stage: files Aug 13 00:44:35.847263 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:35.847263 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:35.851946 ignition[1002]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:44:35.854437 ignition[1002]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:44:35.854437 ignition[1002]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:44:35.857676 ignition[1002]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:44:35.858434 ignition[1002]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:44:35.858434 ignition[1002]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:44:35.858222 unknown[1002]: wrote ssh authorized keys file for user: core Aug 13 00:44:35.860644 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:44:35.860644 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:44:35.982133 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:44:36.244142 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:44:36.244142 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:44:36.245801 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:44:36.253305 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:44:36.522739 systemd-networkd[814]: eth0: Gained IPv6LL Aug 13 00:44:36.970672 systemd-networkd[814]: eth1: Gained IPv6LL Aug 13 00:44:37.069251 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 00:44:37.656348 ignition[1002]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:44:37.656348 ignition[1002]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:44:37.658658 ignition[1002]: INFO : files: files passed Aug 13 00:44:37.658658 ignition[1002]: INFO : Ignition finished successfully Aug 13 00:44:37.660912 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:44:37.666061 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:44:37.667844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:44:37.685805 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:44:37.686862 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:44:37.699248 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:44:37.699248 initrd-setup-root-after-ignition[1031]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:44:37.701468 initrd-setup-root-after-ignition[1035]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:44:37.703690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:44:37.705591 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:44:37.707822 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:44:37.770501 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Aug 13 00:44:37.770720 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:44:37.772537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:44:37.773510 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:44:37.774180 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:44:37.775683 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:44:37.801981 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:44:37.805587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:44:37.833411 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:44:37.834851 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:44:37.835529 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:44:37.836095 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:44:37.836333 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:44:37.839585 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:44:37.840029 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:44:37.840806 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:44:37.841638 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:44:37.842476 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:44:37.843235 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 00:44:37.844075 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:44:37.844849 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:44:37.845927 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:44:37.846872 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:44:37.847676 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:44:37.848275 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:44:37.848452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:44:37.849319 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:44:37.849859 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:44:37.850729 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:44:37.850993 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:44:37.851470 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:44:37.851655 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:44:37.852667 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:44:37.852836 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:44:37.853951 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:44:37.854104 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:44:37.854635 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Aug 13 00:44:37.854779 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:44:37.856398 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:44:37.860687 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:44:37.861091 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:44:37.861343 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:44:37.864621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:44:37.864756 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:44:37.870544 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:44:37.874697 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:44:37.900398 ignition[1055]: INFO : Ignition 2.21.0 Aug 13 00:44:37.900398 ignition[1055]: INFO : Stage: umount Aug 13 00:44:37.902712 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:44:37.902712 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:44:37.906440 ignition[1055]: INFO : umount: umount passed Aug 13 00:44:37.907700 ignition[1055]: INFO : Ignition finished successfully Aug 13 00:44:37.908181 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:44:37.910782 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:44:37.911495 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:44:37.913095 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:44:37.913280 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:44:37.914790 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:44:37.914899 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:44:37.915693 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:44:37.915758 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:44:37.916282 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:44:37.916338 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:44:37.916927 systemd[1]: Stopped target network.target - Network. Aug 13 00:44:37.917597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:44:37.917665 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:44:37.918274 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:44:37.918917 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:44:37.920500 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:44:37.921283 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:44:37.922141 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:44:37.922791 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:44:37.922854 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:44:37.923341 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:44:37.923400 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:44:37.923904 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:44:37.923999 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Aug 13 00:44:37.924561 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:44:37.924616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:44:37.925099 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:44:37.925244 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:44:37.925950 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:44:37.926573 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:44:37.931629 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:44:37.931810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:44:37.936439 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:44:37.936925 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:44:37.937193 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:44:37.939342 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:44:37.940817 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 00:44:37.942031 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:44:37.942084 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:44:37.943778 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:44:37.944117 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:44:37.944181 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:44:37.944679 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:44:37.944733 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:44:37.945520 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:44:37.945567 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:44:37.946017 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:44:37.946063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:44:37.946912 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:44:37.951795 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:44:37.951907 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:44:37.966144 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:44:37.966340 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:44:37.969599 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:44:37.969743 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:44:37.971039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:44:37.971093 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:44:37.971952 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:44:37.972031 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:44:37.972888 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Aug 13 00:44:37.972949 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:44:37.973549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:44:37.973606 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:44:37.975064 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:44:37.976653 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 00:44:37.976746 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:44:37.980602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:44:37.980721 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:44:37.982725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:44:37.982809 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:44:37.984172 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:44:37.984243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:44:37.987695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:44:37.987781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:37.995874 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 00:44:37.995988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 00:44:37.996035 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:44:37.996093 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:44:37.996874 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:44:37.997035 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:44:37.998508 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:44:37.998658 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:44:38.000884 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:44:38.006560 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:44:38.046038 systemd[1]: Switching root. Aug 13 00:44:38.089790 systemd-journald[212]: Journal stopped Aug 13 00:44:39.424538 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). 
Aug 13 00:44:39.424643 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:44:39.424666 kernel: SELinux: policy capability open_perms=1 Aug 13 00:44:39.424692 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:44:39.424710 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:44:39.424728 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:44:39.424747 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:44:39.424771 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:44:39.424789 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:44:39.424807 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 00:44:39.424825 kernel: audit: type=1403 audit(1755045878.256:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:44:39.424849 systemd[1]: Successfully loaded SELinux policy in 50.835ms. Aug 13 00:44:39.424887 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.785ms. Aug 13 00:44:39.424908 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:44:39.424928 systemd[1]: Detected virtualization kvm. Aug 13 00:44:39.424949 systemd[1]: Detected architecture x86-64. Aug 13 00:44:39.424967 systemd[1]: Detected first boot. Aug 13 00:44:39.432511 systemd[1]: Hostname set to <ci-4372.1.0-8-f473d4f215>. Aug 13 00:44:39.432563 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:44:39.432584 zram_generator::config[1099]: No configuration found. Aug 13 00:44:39.432609 kernel: Guest personality initialized and is inactive Aug 13 00:44:39.432630 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:44:39.432649 kernel: Initialized host personality Aug 13 00:44:39.432666 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:44:39.432696 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:44:39.432718 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:44:39.432736 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:44:39.432755 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:44:39.432774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:44:39.432796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:44:39.432817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:44:39.432838 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:44:39.432858 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:44:39.432885 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:44:39.432906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:44:39.432926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:44:39.432947 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:44:39.432967 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:44:39.432987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:44:39.433007 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:44:39.433035 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:44:39.433054 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:44:39.433074 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:44:39.433096 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:44:39.433116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:44:39.433151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:44:39.433171 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:44:39.433192 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:44:39.433216 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:44:39.433236 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:44:39.433259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:44:39.433277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:44:39.433296 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:44:39.433315 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:44:39.433333 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:44:39.433356 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:44:39.433900 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:44:39.433956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:44:39.433979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:44:39.434000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:44:39.434046 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:44:39.434071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:44:39.434091 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:44:39.434110 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:44:39.434134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:39.434154 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:44:39.434184 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:44:39.434204 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:44:39.435938 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:44:39.435984 systemd[1]: Reached target machines.target - Containers. Aug 13 00:44:39.436007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 13 00:44:39.436027 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:44:39.436049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:44:39.436070 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:44:39.436101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:44:39.436122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:44:39.436142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:44:39.436163 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:44:39.436184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:44:39.436207 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:44:39.436227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:44:39.436247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:44:39.436266 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:44:39.436292 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:44:39.436317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:44:39.436337 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:44:39.436357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:44:39.437328 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:44:39.437401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:44:39.437426 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:44:39.437446 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:44:39.437465 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:44:39.437483 systemd[1]: Stopped verity-setup.service. Aug 13 00:44:39.437508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:39.437527 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:44:39.437545 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:44:39.437563 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:44:39.437580 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:44:39.437597 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:44:39.437614 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:44:39.437631 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:44:39.437652 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:44:39.437670 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:44:39.437691 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 13 00:44:39.437713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:44:39.437733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:44:39.437753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:44:39.437773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:44:39.437793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:44:39.437813 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:44:39.437832 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:44:39.437855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:44:39.437876 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:44:39.437896 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:44:39.437915 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:44:39.437935 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:44:39.437954 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:44:39.437974 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:44:39.437998 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:44:39.438019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:44:39.438042 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:44:39.438064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:44:39.438083 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:44:39.438104 kernel: fuse: init (API version 7.41) Aug 13 00:44:39.438125 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:44:39.438145 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:44:39.438165 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:44:39.438184 kernel: ACPI: bus type drm_connector registered Aug 13 00:44:39.438206 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:44:39.438225 kernel: loop: module loaded Aug 13 00:44:39.438249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:44:39.438329 systemd-journald[1169]: Collecting audit messages is disabled. Aug 13 00:44:39.438862 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:44:39.438897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:44:39.438919 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:44:39.438939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:44:39.438966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:44:39.438987 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Aug 13 00:44:39.439008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:44:39.439027 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:44:39.439046 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:44:39.439066 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:44:39.439089 systemd-journald[1169]: Journal started Aug 13 00:44:39.439135 systemd-journald[1169]: Runtime Journal (/run/log/journal/576e9e830c87411ea020c0d47ae7e8af) is 4.9M, max 39.5M, 34.6M free. Aug 13 00:44:39.454001 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:44:38.922833 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:44:38.947353 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 00:44:39.455737 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:44:38.947932 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:44:39.354697 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Aug 13 00:44:39.354719 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Aug 13 00:44:39.497497 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:44:39.500549 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 00:44:39.507851 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:44:39.513064 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:44:39.523809 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:44:39.549434 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:44:39.564864 systemd-journald[1169]: Time spent on flushing to /var/log/journal/576e9e830c87411ea020c0d47ae7e8af is 65.072ms for 1021 entries. Aug 13 00:44:39.564864 systemd-journald[1169]: System Journal (/var/log/journal/576e9e830c87411ea020c0d47ae7e8af) is 8M, max 195.6M, 187.6M free. Aug 13 00:44:39.656664 systemd-journald[1169]: Received client request to flush runtime journal. Aug 13 00:44:39.656717 kernel: loop1: detected capacity change from 0 to 113872 Aug 13 00:44:39.656735 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 00:44:39.571506 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:44:39.635750 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:44:39.648433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:44:39.663357 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:44:39.686719 kernel: loop3: detected capacity change from 0 to 8 Aug 13 00:44:39.710913 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Aug 13 00:44:39.710935 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Aug 13 00:44:39.717489 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 00:44:39.729236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:44:39.748271 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 00:44:39.764475 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 00:44:39.788403 kernel: loop7: detected capacity change from 0 to 8 Aug 13 00:44:39.789530 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Aug 13 00:44:39.790107 (sd-merge)[1250]: Merged extensions into '/usr'. Aug 13 00:44:39.803525 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:44:39.803543 systemd[1]: Reloading... Aug 13 00:44:39.989434 zram_generator::config[1277]: No configuration found. Aug 13 00:44:40.176666 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:44:40.206410 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:44:40.291733 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:44:40.292057 systemd[1]: Reloading finished in 488 ms. Aug 13 00:44:40.308165 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:44:40.309643 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:44:40.322995 systemd[1]: Starting ensure-sysext.service... Aug 13 00:44:40.327623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:44:40.364488 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:44:40.364509 systemd[1]: Reloading... Aug 13 00:44:40.405652 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 00:44:40.406076 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 00:44:40.406636 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:44:40.407584 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:44:40.408915 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:44:40.409448 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Aug 13 00:44:40.409580 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Aug 13 00:44:40.415090 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:44:40.415247 systemd-tmpfiles[1321]: Skipping /boot Aug 13 00:44:40.474249 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:44:40.474267 systemd-tmpfiles[1321]: Skipping /boot Aug 13 00:44:40.527550 zram_generator::config[1363]: No configuration found. Aug 13 00:44:40.649435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:44:40.792077 systemd[1]: Reloading finished in 426 ms. Aug 13 00:44:40.818467 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
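The (sd-merge) lines above show systemd-sysext overlaying the four extension images onto /usr during the reload. A simplified sketch of the discovery half of that step; the search-directory list and the .raw match are assumptions based on the /etc/extensions/kubernetes.raw link written during the files stage, and the real sysext also validates extension-release metadata before merging:

```python
# Simplified sketch of how extension images like kubernetes.raw get picked up;
# not systemd-sysext's actual implementation.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed

def discover_extensions() -> list[Path]:
    """Collect candidate sysext images (raw images or plain directories)."""
    found: list[Path] = []
    for directory in map(Path, SEARCH_DIRS):
        if directory.is_dir():
            found.extend(sorted(p for p in directory.iterdir()
                                if p.suffix == ".raw" or p.is_dir()))
    return found

names = [p.name.removesuffix(".raw") for p in discover_extensions()]
print("Using extensions", ", ".join(f"'{n}'" for n in names))
```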
Aug 13 00:44:40.841763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:44:40.854836 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:44:40.859738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:44:40.871351 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:44:40.877407 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:44:40.882601 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:44:40.895021 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:44:40.901095 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.901503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:44:40.908254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:44:40.914222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:44:40.924908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:44:40.926170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:44:40.926422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:44:40.926596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.933077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.933474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:44:40.933920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:44:40.934068 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:44:40.943429 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:44:40.944050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.955664 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.956110 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:44:40.963876 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:44:40.964785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 00:44:40.965021 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:44:40.965259 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:40.969014 systemd[1]: Finished ensure-sysext.service. Aug 13 00:44:40.972008 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:44:40.982188 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:44:41.008498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:44:41.010206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:44:41.010915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:44:41.023314 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:44:41.025256 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:44:41.027097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:44:41.027962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:44:41.036479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:44:41.036860 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:44:41.041530 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:44:41.041757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:44:41.042656 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:44:41.043838 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:44:41.044043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:44:41.045897 systemd-udevd[1397]: Using default interface naming scheme 'v255'. Aug 13 00:44:41.072034 augenrules[1433]: No rules Aug 13 00:44:41.072789 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:44:41.074540 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:44:41.087543 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:44:41.099070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:44:41.103320 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:44:41.114540 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:44:41.298239 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Aug 13 00:44:41.302189 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Aug 13 00:44:41.302602 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:44:41.302747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:44:41.304595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:44:41.311643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:44:41.316675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:44:41.317190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:44:41.317239 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:44:41.317272 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:44:41.317291 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:44:41.340061 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:44:41.342638 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:44:41.355356 systemd-networkd[1444]: lo: Link UP Aug 13 00:44:41.355398 systemd-networkd[1444]: lo: Gained carrier Aug 13 00:44:41.356889 systemd-networkd[1444]: Enumeration completed Aug 13 00:44:41.357045 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:44:41.367443 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:44:41.374682 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:44:41.396070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:44:41.397577 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:44:41.421409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:44:41.421739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:44:41.424004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:44:41.452218 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:44:41.452512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:44:41.453912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:44:41.457747 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 00:44:41.459452 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Aug 13 00:44:41.493240 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:44:41.502107 systemd-resolved[1396]: Positive Trust Anchors: Aug 13 00:44:41.502123 systemd-resolved[1396]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:44:41.502163 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:44:41.507445 systemd-networkd[1444]: eth0: Configuring with /run/systemd/network/10-ee:8c:02:67:da:93.network. Aug 13 00:44:41.510708 systemd-networkd[1444]: eth0: Link UP Aug 13 00:44:41.510859 systemd-networkd[1444]: eth0: Gained carrier Aug 13 00:44:41.514346 systemd-resolved[1396]: Using system hostname 'ci-4372.1.0-8-f473d4f215'. Aug 13 00:44:41.519597 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:41.520099 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:44:41.521503 systemd[1]: Reached target network.target - Network. Aug 13 00:44:41.521872 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:44:41.523545 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:44:41.524341 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:44:41.525141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:44:41.526316 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 00:44:41.527719 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:44:41.528803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:44:41.529681 systemd-networkd[1444]: eth1: Configuring with /run/systemd/network/10-96:3f:77:b8:43:72.network. Aug 13 00:44:41.530092 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:44:41.530708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:44:41.530749 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:44:41.531556 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:44:41.533993 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:44:41.538150 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:44:41.545714 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:41.546747 systemd-networkd[1444]: eth1: Link UP Aug 13 00:44:41.547486 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:41.550701 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:44:41.551461 systemd-networkd[1444]: eth1: Gained carrier Aug 13 00:44:41.553084 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:44:41.554151 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Aug 13 00:44:41.556511 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:41.558628 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:41.563277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:44:41.565021 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:44:41.567563 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:44:41.576632 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:44:41.576679 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:44:41.577086 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:44:41.577505 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:44:41.577537 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:44:41.579627 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:44:41.583527 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:44:41.587662 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:44:41.592708 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:44:41.598642 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:44:41.602788 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:44:41.603281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:44:41.611109 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 00:44:41.624208 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:44:41.631663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:44:41.637830 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:44:41.641054 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:44:41.648131 jq[1503]: false Aug 13 00:44:41.652779 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:44:41.655392 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:44:41.662790 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:44:41.666443 extend-filesystems[1504]: Found /dev/vda6 Aug 13 00:44:41.668781 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:44:41.678040 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:44:41.684025 extend-filesystems[1504]: Found /dev/vda9 Aug 13 00:44:41.690514 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:44:41.691563 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:44:41.691802 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
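Several of the "Listening on ..." units above (dbus.socket, docker.socket, sshd.socket, systemd-hostnamed.socket) are socket-activated: systemd binds the socket itself and passes it to the service as inherited file descriptors starting at fd 3, advertised through the LISTEN_FDS/LISTEN_PID environment variables. A minimal sketch of the receiving side, assuming a stream socket:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd, per sd_listen_fds(3)

def inherited_sockets() -> list[socket.socket]:
    """Adopt sockets passed by systemd socket activation, if any."""
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []  # the fds were not meant for this process
    count = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i)
            for i in range(count)]

socks = inherited_sockets()
if socks:
    conn, addr = socks[0].accept()  # serve the connection that woke us
```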
Aug 13 00:44:41.714629 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Refreshing passwd entry cache Aug 13 00:44:41.714635 oslogin_cache_refresh[1505]: Refreshing passwd entry cache Aug 13 00:44:41.725821 extend-filesystems[1504]: Checking size of /dev/vda9 Aug 13 00:44:41.738049 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Failure getting users, quitting Aug 13 00:44:41.738049 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:44:41.738033 oslogin_cache_refresh[1505]: Failure getting users, quitting Aug 13 00:44:41.738055 oslogin_cache_refresh[1505]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 00:44:41.740402 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Refreshing group entry cache Aug 13 00:44:41.739024 oslogin_cache_refresh[1505]: Refreshing group entry cache Aug 13 00:44:41.748412 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Failure getting groups, quitting Aug 13 00:44:41.748412 google_oslogin_nss_cache[1505]: oslogin_cache_refresh[1505]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:44:41.745641 oslogin_cache_refresh[1505]: Failure getting groups, quitting Aug 13 00:44:41.748431 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:44:41.745658 oslogin_cache_refresh[1505]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 00:44:41.749829 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:44:41.756646 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 00:44:41.757489 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 00:44:41.790817 (ntainerd)[1537]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:44:41.794971 jq[1519]: true Aug 13 00:44:41.807343 coreos-metadata[1500]: Aug 13 00:44:41.806 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:44:41.822348 coreos-metadata[1500]: Aug 13 00:44:41.817 INFO Fetch successful Aug 13 00:44:41.828615 extend-filesystems[1504]: Resized partition /dev/vda9 Aug 13 00:44:41.869555 extend-filesystems[1547]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 00:44:41.877688 systemd-logind[1516]: New seat seat0. Aug 13 00:44:41.883004 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:44:41.891541 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 00:44:41.891622 update_engine[1517]: I20250813 00:44:41.889676 1517 main.cc:92] Flatcar Update Engine starting Aug 13 00:44:41.903926 tar[1529]: linux-amd64/LICENSE Aug 13 00:44:41.903926 tar[1529]: linux-amd64/helm Aug 13 00:44:41.899473 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:44:41.899729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:44:41.905528 dbus-daemon[1501]: [system] SELinux support is enabled Aug 13 00:44:41.905739 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:44:41.912529 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Aug 13 00:44:41.912582 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:44:41.914528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:44:41.914610 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Aug 13 00:44:41.914630 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:44:41.926299 jq[1544]: true Aug 13 00:44:41.933090 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:44:41.945455 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:44:41.949579 update_engine[1517]: I20250813 00:44:41.947606 1517 update_check_scheduler.cc:74] Next update check in 5m26s Aug 13 00:44:41.964529 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 00:44:41.972081 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:44:41.972081 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 00:44:41.972081 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 00:44:41.977677 extend-filesystems[1504]: Resized filesystem in /dev/vda9 Aug 13 00:44:41.981667 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:44:41.982621 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:44:41.982836 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:44:42.013090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:44:42.020698 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:44:42.038550 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:44:42.042795 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:44:42.094838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:44:42.139443 bash[1575]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:44:42.142982 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:44:42.154652 systemd[1]: Starting sshkeys.service... Aug 13 00:44:42.198778 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:44:42.230936 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:44:42.238526 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
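The resize2fs lines above record an online grow of /dev/vda9 from 553472 to 15121403 blocks at the 4 KiB block size reported by EXT4-fs. A quick check of what that means in bytes:

```python
BLOCK = 4096  # 4 KiB filesystem blocks, per the EXT4-fs log line

old_blocks, new_blocks = 553_472, 15_121_403
for label, blocks in (("before", old_blocks), ("after", new_blocks)):
    print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
# before: 2.11 GiB  (the shipped root filesystem)
# after: 57.68 GiB  (grown to fill the droplet's disk)
```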
Aug 13 00:44:42.266686 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 00:44:42.267100 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:44:42.327337 coreos-metadata[1584]: Aug 13 00:44:42.322 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:44:42.335684 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:44:42.337720 coreos-metadata[1584]: Aug 13 00:44:42.336 INFO Fetch successful Aug 13 00:44:42.350940 unknown[1584]: wrote ssh authorized keys file for user: core Aug 13 00:44:42.363408 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 00:44:42.367886 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:44:42.388613 update-ssh-keys[1591]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:44:42.387620 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:44:42.394887 systemd[1]: Finished sshkeys.service. Aug 13 00:44:42.507423 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:44:42.536023 containerd[1537]: time="2025-08-13T00:44:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 00:44:42.538180 containerd[1537]: time="2025-08-13T00:44:42.537757477Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 00:44:42.558537 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 00:44:42.558637 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 00:44:42.561716 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:44:42.562758 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:44:42.562838 kernel: [drm] features: -context_init Aug 13 00:44:42.563711 kernel: [drm] number of scanouts: 1 Aug 13 00:44:42.563761 kernel: [drm] number of cap sets: 0 Aug 13 00:44:42.563976 containerd[1537]: time="2025-08-13T00:44:42.563900113Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.415µs" Aug 13 00:44:42.564587 containerd[1537]: time="2025-08-13T00:44:42.564548363Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 00:44:42.566741 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Aug 13 00:44:42.567461 containerd[1537]: time="2025-08-13T00:44:42.567424400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 00:44:42.568661 containerd[1537]: time="2025-08-13T00:44:42.567842348Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 00:44:42.568661 containerd[1537]: time="2025-08-13T00:44:42.567875688Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 00:44:42.568661 containerd[1537]: time="2025-08-13T00:44:42.567910210Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 00:44:42.568661 containerd[1537]: time="2025-08-13T00:44:42.567982117Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 
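Both coreos-metadata fetches above hit DigitalOcean's link-local metadata endpoint, once for the environment and once for SSH keys. A sketch of the same fetch with the stdlib; the endpoint is taken from the log, while the 'hostname' and 'public_keys' fields are assumptions about the v1 document layout:

```python
import json
import urllib.request

URL = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log above

with urllib.request.urlopen(URL, timeout=5) as resp:
    meta = json.load(resp)

# Assumed fields of the v1 metadata document:
print(meta.get("hostname"))
for key in meta.get("public_keys", []):
    print(key)  # what update-ssh-keys writes to ~core/.ssh/authorized_keys
```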
Aug 13 00:44:42.568661 containerd[1537]: time="2025-08-13T00:44:42.567998255Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:44:42.570391 containerd[1537]: time="2025-08-13T00:44:42.570220072Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 00:44:42.570391 containerd[1537]: time="2025-08-13T00:44:42.570270411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:44:42.570391 containerd[1537]: time="2025-08-13T00:44:42.570286712Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 00:44:42.570391 containerd[1537]: time="2025-08-13T00:44:42.570294992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.571496738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.571763929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.571811808Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.571823828Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.571858334Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.572109521Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 00:44:42.572625 containerd[1537]: time="2025-08-13T00:44:42.572192552Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:44:42.583216 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
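containerd probes each snapshotter and skips the ones whose preconditions fail, as the "skip loading plugin" lines above show: btrfs because its root sits on ext4, zfs because its state directory is missing, devmapper because it is unconfigured. A sketch of the filesystem-type check behind the btrfs skip, done here by scanning /proc/mounts:

```python
import os

def fs_type(path: str) -> str:
    """fstype of the longest /proc/mounts entry covering path."""
    path = os.path.realpath(path)
    best, best_type = "", "unknown"
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            _, mnt, fstype = line.split()[:3]
            if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best):
                best, best_type = mnt, fstype
    return best_type

# containerd skips the btrfs snapshotter unless its root is on btrfs.
root = "/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
print(fs_type(root))  # "ext4" on this host, hence the skip above
```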
Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.583878112Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.583964599Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.583982053Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.583995046Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584040532Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584051742Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584062569Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584086287Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584108158Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584120139Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584130465Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584144073Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584318554Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 00:44:42.584623 containerd[1537]: time="2025-08-13T00:44:42.584350486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584365778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584399059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584410600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584421005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584434166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584443609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 
13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584463454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584476436Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584486432Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584568287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:44:42.585036 containerd[1537]: time="2025-08-13T00:44:42.584587228Z" level=info msg="Start snapshots syncer" Aug 13 00:44:42.588533 containerd[1537]: time="2025-08-13T00:44:42.585346992Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 00:44:42.588533 containerd[1537]: time="2025-08-13T00:44:42.587706824Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 00:44:42.588844 containerd[1537]: time="2025-08-13T00:44:42.587786215Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.590955045Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591153755Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591195394Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591208773Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591263003Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591279301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591289895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591308465Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591348272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591359089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591387932Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591435639Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591503325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:44:42.593417 containerd[1537]: time="2025-08-13T00:44:42.591513835Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591523483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591531130Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591540634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591794055Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591829680Z" level=info msg="runtime interface created" Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591835167Z" level=info msg="created NRI interface" Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.591856639Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.592719083Z" level=info msg="Connect containerd service" Aug 13 00:44:42.593796 containerd[1537]: time="2025-08-13T00:44:42.592784012Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 
13 00:44:42.612030 containerd[1537]: time="2025-08-13T00:44:42.610565182Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:44:42.668124 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:44:42.671744 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:44:42.727727 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:44:42.728028 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:44:42.738588 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:44:42.849462 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:42.855758 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:44:42.858892 systemd-networkd[1444]: eth0: Gained IPv6LL Aug 13 00:44:42.863543 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:42.864531 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:44:42.869164 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:44:42.870909 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:44:42.893922 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:44:42.897813 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:44:42.907665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:44:42.910685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:44:42.932733 systemd-logind[1516]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978732811Z" level=info msg="Start subscribing containerd event" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978805235Z" level=info msg="Start recovering state" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978934788Z" level=info msg="Start event monitor" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978952844Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978963154Z" level=info msg="Start streaming server" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978974605Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978985633Z" level=info msg="runtime interface starting up..." Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.978993467Z" level=info msg="starting plugins..." Aug 13 00:44:42.980457 containerd[1537]: time="2025-08-13T00:44:42.979008955Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:44:42.985159 containerd[1537]: time="2025-08-13T00:44:42.982863736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:44:42.985159 containerd[1537]: time="2025-08-13T00:44:42.982989431Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:44:42.983169 systemd[1]: Started containerd.service - containerd container runtime. 
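The "failed to load cni during init" error above is expected at this point in the boot: the cri config earlier sets confDir to /etc/cni/net.d, and no network config has been installed there yet (a CNI plugin typically drops one in after the node joins a cluster). A sketch reproducing the check:

```python
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")  # confDir from the cri config above

exts = {".conf", ".conflist", ".json"}  # extensions libcni loads
confs = sorted(p for p in CNI_DIR.iterdir() if p.suffix in exts) \
    if CNI_DIR.is_dir() else []

if not confs:
    print(f"no network config found in {CNI_DIR}: cni plugin not initialized")
else:
    for conf in confs:
        print("CNI config:", conf)
```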
Aug 13 00:44:42.986948 containerd[1537]: time="2025-08-13T00:44:42.986468104Z" level=info msg="containerd successfully booted in 0.452705s" Aug 13 00:44:43.036585 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:44:43.073734 systemd-logind[1516]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:44:43.192718 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:44:43.193838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:43.194147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:44:43.198133 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:44:43.201668 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:44:43.220919 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:44:43.242875 systemd-networkd[1444]: eth1: Gained IPv6LL Aug 13 00:44:43.246519 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:43.318492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:44:43.545020 tar[1529]: linux-amd64/README.md Aug 13 00:44:43.564248 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:44:44.340875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:44:44.342004 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:44:44.343033 systemd[1]: Startup finished in 3.570s (kernel) + 6.632s (initrd) + 6.134s (userspace) = 16.337s. Aug 13 00:44:44.347025 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:44:45.017940 kubelet[1668]: E0813 00:44:45.017840 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:44:45.021492 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:44:45.021674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:44:45.022218 systemd[1]: kubelet.service: Consumed 1.386s CPU time, 262.9M memory peak. Aug 13 00:44:45.775362 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:44:45.777663 systemd[1]: Started sshd@0-146.190.133.69:22-139.178.68.195:47922.service - OpenSSH per-connection server daemon (139.178.68.195:47922). Aug 13 00:44:45.883657 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 47922 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:45.885635 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:45.898793 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:44:45.899954 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:44:45.902899 systemd-logind[1516]: New session 1 of user core. Aug 13 00:44:45.936099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:44:45.939698 systemd[1]: Starting user@500.service - User Manager for UID 500... 
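The kubelet exit above is the usual pre-bootstrap state: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, fails with status 1, and systemd keeps rescheduling it until the file appears (typically after kubeadm init or join). A trivial preflight sketch of the same check:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

if not os.path.exists(CONFIG):
    sys.exit(f"{CONFIG} missing: run 'kubeadm init' or 'kubeadm join' "
             "to generate it before the kubelet can start")
print("kubelet config present; the service should stay up")
```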
Aug 13 00:44:45.964551 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:44:45.968335 systemd-logind[1516]: New session c1 of user core. Aug 13 00:44:46.154632 systemd[1684]: Queued start job for default target default.target. Aug 13 00:44:46.164748 systemd[1684]: Created slice app.slice - User Application Slice. Aug 13 00:44:46.164791 systemd[1684]: Reached target paths.target - Paths. Aug 13 00:44:46.164840 systemd[1684]: Reached target timers.target - Timers. Aug 13 00:44:46.166814 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:44:46.182210 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:44:46.182762 systemd[1684]: Reached target sockets.target - Sockets. Aug 13 00:44:46.182992 systemd[1684]: Reached target basic.target - Basic System. Aug 13 00:44:46.183190 systemd[1684]: Reached target default.target - Main User Target. Aug 13 00:44:46.183264 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:44:46.183484 systemd[1684]: Startup finished in 206ms. Aug 13 00:44:46.185833 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:44:46.253679 systemd[1]: Started sshd@1-146.190.133.69:22-139.178.68.195:47936.service - OpenSSH per-connection server daemon (139.178.68.195:47936). Aug 13 00:44:46.315453 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 47936 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:46.317084 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:46.323828 systemd-logind[1516]: New session 2 of user core. Aug 13 00:44:46.330679 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:44:46.394250 sshd[1697]: Connection closed by 139.178.68.195 port 47936 Aug 13 00:44:46.395150 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:46.411996 systemd[1]: sshd@1-146.190.133.69:22-139.178.68.195:47936.service: Deactivated successfully. Aug 13 00:44:46.414787 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:44:46.416471 systemd-logind[1516]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:44:46.420296 systemd[1]: Started sshd@2-146.190.133.69:22-139.178.68.195:47944.service - OpenSSH per-connection server daemon (139.178.68.195:47944). Aug 13 00:44:46.421602 systemd-logind[1516]: Removed session 2. Aug 13 00:44:46.480506 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 47944 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:46.482737 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:46.489068 systemd-logind[1516]: New session 3 of user core. Aug 13 00:44:46.496650 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:44:46.554359 sshd[1705]: Connection closed by 139.178.68.195 port 47944 Aug 13 00:44:46.555071 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:46.570219 systemd[1]: sshd@2-146.190.133.69:22-139.178.68.195:47944.service: Deactivated successfully. Aug 13 00:44:46.573350 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:44:46.575222 systemd-logind[1516]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:44:46.577983 systemd[1]: Started sshd@3-146.190.133.69:22-139.178.68.195:47950.service - OpenSSH per-connection server daemon (139.178.68.195:47950). 
Aug 13 00:44:46.579491 systemd-logind[1516]: Removed session 3. Aug 13 00:44:46.643302 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 47950 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:46.645243 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:46.652866 systemd-logind[1516]: New session 4 of user core. Aug 13 00:44:46.659725 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:44:46.725247 sshd[1713]: Connection closed by 139.178.68.195 port 47950 Aug 13 00:44:46.725992 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:46.744185 systemd[1]: sshd@3-146.190.133.69:22-139.178.68.195:47950.service: Deactivated successfully. Aug 13 00:44:46.746524 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:44:46.747455 systemd-logind[1516]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:44:46.751853 systemd[1]: Started sshd@4-146.190.133.69:22-139.178.68.195:47958.service - OpenSSH per-connection server daemon (139.178.68.195:47958). Aug 13 00:44:46.753869 systemd-logind[1516]: Removed session 4. Aug 13 00:44:46.815316 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 47958 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:46.816902 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:46.822441 systemd-logind[1516]: New session 5 of user core. Aug 13 00:44:46.829694 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:44:46.902047 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:44:46.902548 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:44:46.920022 sudo[1722]: pam_unix(sudo:session): session closed for user root Aug 13 00:44:46.925093 sshd[1721]: Connection closed by 139.178.68.195 port 47958 Aug 13 00:44:46.923918 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:46.939129 systemd[1]: sshd@4-146.190.133.69:22-139.178.68.195:47958.service: Deactivated successfully. Aug 13 00:44:46.941469 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:44:46.942804 systemd-logind[1516]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:44:46.948275 systemd[1]: Started sshd@5-146.190.133.69:22-139.178.68.195:47960.service - OpenSSH per-connection server daemon (139.178.68.195:47960). Aug 13 00:44:46.949976 systemd-logind[1516]: Removed session 5. Aug 13 00:44:47.012861 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 47960 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:47.015070 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:47.022512 systemd-logind[1516]: New session 6 of user core. Aug 13 00:44:47.028709 systemd[1]: Started session-6.scope - Session 6 of User core. 
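The block above is a burst of short SSH sessions from 139.178.68.195 (sessions 1 through 5 so far), each opening, optionally running a sudo command, and closing. A sketch that tallies such sessions from journal text with a regex; the `log_text` input is a hypothetical string holding lines like those above:

```python
import re

# Matches the sshd accept lines seen above.
ACCEPT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) port (?P<port>\d+)")

def tally_sessions(log_text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for m in ACCEPT.finditer(log_text):
        key = f"{m['user']}@{m['ip']}"
        counts[key] = counts.get(key, 0) + 1
    return counts

# e.g. tally_sessions(journal_text) -> {"core@139.178.68.195": 6}
```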
Aug 13 00:44:47.093410 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:44:47.094133 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:44:47.100645 sudo[1732]: pam_unix(sudo:session): session closed for user root Aug 13 00:44:47.108302 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:44:47.108817 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:44:47.120857 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:44:47.175553 augenrules[1754]: No rules Aug 13 00:44:47.177096 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:44:47.177732 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:44:47.179008 sudo[1731]: pam_unix(sudo:session): session closed for user root Aug 13 00:44:47.184413 sshd[1730]: Connection closed by 139.178.68.195 port 47960 Aug 13 00:44:47.183854 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Aug 13 00:44:47.195594 systemd[1]: sshd@5-146.190.133.69:22-139.178.68.195:47960.service: Deactivated successfully. Aug 13 00:44:47.197918 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:44:47.201650 systemd-logind[1516]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:44:47.205898 systemd[1]: Started sshd@6-146.190.133.69:22-139.178.68.195:47964.service - OpenSSH per-connection server daemon (139.178.68.195:47964). Aug 13 00:44:47.207740 systemd-logind[1516]: Removed session 6. Aug 13 00:44:47.271424 sshd[1763]: Accepted publickey for core from 139.178.68.195 port 47964 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:44:47.274305 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:44:47.280864 systemd-logind[1516]: New session 7 of user core. Aug 13 00:44:47.291772 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:44:47.353500 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:44:47.353905 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:44:47.876681 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:44:47.901963 (dockerd)[1784]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:44:48.317353 dockerd[1784]: time="2025-08-13T00:44:48.316732724Z" level=info msg="Starting up" Aug 13 00:44:48.321669 dockerd[1784]: time="2025-08-13T00:44:48.321573314Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:44:48.392879 dockerd[1784]: time="2025-08-13T00:44:48.392822597Z" level=info msg="Loading containers: start." Aug 13 00:44:48.407412 kernel: Initializing XFRM netlink socket Aug 13 00:44:48.650175 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:48.650289 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:48.665628 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. 
Aug 13 00:44:48.708054 systemd-networkd[1444]: docker0: Link UP Aug 13 00:44:48.708448 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Aug 13 00:44:48.711774 dockerd[1784]: time="2025-08-13T00:44:48.711707191Z" level=info msg="Loading containers: done." Aug 13 00:44:48.729890 dockerd[1784]: time="2025-08-13T00:44:48.729825148Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:44:48.732580 dockerd[1784]: time="2025-08-13T00:44:48.729948040Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:44:48.732580 dockerd[1784]: time="2025-08-13T00:44:48.730071441Z" level=info msg="Initializing buildkit" Aug 13 00:44:48.732732 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck926415986-merged.mount: Deactivated successfully. Aug 13 00:44:48.757400 dockerd[1784]: time="2025-08-13T00:44:48.757324229Z" level=info msg="Completed buildkit initialization" Aug 13 00:44:48.766565 dockerd[1784]: time="2025-08-13T00:44:48.766488067Z" level=info msg="Daemon has completed initialization" Aug 13 00:44:48.767492 dockerd[1784]: time="2025-08-13T00:44:48.767073682Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:44:48.767361 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:44:49.753963 containerd[1537]: time="2025-08-13T00:44:49.753905588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:44:50.311607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255199558.mount: Deactivated successfully. 
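dockerd stamps its own RFC 3339 timestamps with nanosecond precision, so the gap between "Starting up" (00:44:48.3167) and "API listen on /run/docker.sock" (00:44:48.7670) puts daemon startup at roughly 450 ms. A sketch of computing such deltas, truncating nanoseconds to the microseconds Python's datetime keeps:

```python
import re
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    """Parse e.g. 2025-08-13T00:44:48.316732724Z, truncating ns -> us."""
    ts = re.sub(r"(\.\d{6})\d*", r"\1", ts.rstrip("Z"))
    return datetime.fromisoformat(ts)

start = parse_ts("2025-08-13T00:44:48.316732724Z")  # "Starting up"
ready = parse_ts("2025-08-13T00:44:48.767073682Z")  # "API listen"
print((ready - start).total_seconds())  # ~0.450
```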
Aug 13 00:44:51.488446 containerd[1537]: time="2025-08-13T00:44:51.488394581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:51.490514 containerd[1537]: time="2025-08-13T00:44:51.490463997Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 00:44:51.490974 containerd[1537]: time="2025-08-13T00:44:51.490918531Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:51.494802 containerd[1537]: time="2025-08-13T00:44:51.494722497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:51.496127 containerd[1537]: time="2025-08-13T00:44:51.495903349Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.741944345s" Aug 13 00:44:51.496127 containerd[1537]: time="2025-08-13T00:44:51.495964531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:44:51.496738 containerd[1537]: time="2025-08-13T00:44:51.496663138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:44:52.988069 containerd[1537]: time="2025-08-13T00:44:52.988020371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:52.989963 containerd[1537]: time="2025-08-13T00:44:52.989904505Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 00:44:52.990989 containerd[1537]: time="2025-08-13T00:44:52.990943265Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:52.993536 containerd[1537]: time="2025-08-13T00:44:52.993461614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:52.994766 containerd[1537]: time="2025-08-13T00:44:52.994551477Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.497795308s" Aug 13 00:44:52.994766 containerd[1537]: time="2025-08-13T00:44:52.994588950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:44:52.995141 
containerd[1537]: time="2025-08-13T00:44:52.995110925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:44:54.216174 containerd[1537]: time="2025-08-13T00:44:54.216108930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:54.217413 containerd[1537]: time="2025-08-13T00:44:54.217254262Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 00:44:54.218117 containerd[1537]: time="2025-08-13T00:44:54.218044149Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:54.221424 containerd[1537]: time="2025-08-13T00:44:54.221051568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:54.222459 containerd[1537]: time="2025-08-13T00:44:54.222406800Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.227253266s" Aug 13 00:44:54.222735 containerd[1537]: time="2025-08-13T00:44:54.222613985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:44:54.223548 containerd[1537]: time="2025-08-13T00:44:54.223507465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:44:55.147257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:44:55.149922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:44:55.323061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924691277.mount: Deactivated successfully. Aug 13 00:44:55.374190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:44:55.386366 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:44:55.474079 kubelet[2072]: E0813 00:44:55.473524 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:44:55.478958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:44:55.479130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:44:55.479606 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.8M memory peak. 
Aug 13 00:44:56.056194 containerd[1537]: time="2025-08-13T00:44:56.056135991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:56.057408 containerd[1537]: time="2025-08-13T00:44:56.057323919Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 00:44:56.058359 containerd[1537]: time="2025-08-13T00:44:56.058285111Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:56.059997 containerd[1537]: time="2025-08-13T00:44:56.059932131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:56.060946 containerd[1537]: time="2025-08-13T00:44:56.060703387Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.837158434s" Aug 13 00:44:56.060946 containerd[1537]: time="2025-08-13T00:44:56.060766571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:44:56.061540 containerd[1537]: time="2025-08-13T00:44:56.061513194Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:44:56.063196 systemd-resolved[1396]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 00:44:56.576008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66037856.mount: Deactivated successfully. 
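Each "Pulled image" line above reports a size and a wall-clock duration, which gives the effective pull throughput; the control-plane images so far all land around 16 MiB/s. A sketch using the numbers copied from the log:

```python
pulls = {
    # image: (size in bytes, seconds), from the containerd lines above
    "kube-apiserver:v1.32.7": (28_796_794, 1.741944345),
    "kube-controller-manager:v1.32.7": (26_385_470, 1.497795308),
    "kube-scheduler:v1.32.7": (20_778_773, 1.227253266),
    "kube-proxy:v1.32.7": (30_894_399, 1.837158434),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 2**20:.1f} MiB/s")  # ~15.8-16.8 MiB/s
```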
Aug 13 00:44:57.567223 containerd[1537]: time="2025-08-13T00:44:57.567151241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:57.568393 containerd[1537]: time="2025-08-13T00:44:57.568223834Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:44:57.568929 containerd[1537]: time="2025-08-13T00:44:57.568892882Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:57.571875 containerd[1537]: time="2025-08-13T00:44:57.571826875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:44:57.573075 containerd[1537]: time="2025-08-13T00:44:57.573038287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.511405143s" Aug 13 00:44:57.573259 containerd[1537]: time="2025-08-13T00:44:57.573182968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:44:57.573959 containerd[1537]: time="2025-08-13T00:44:57.573929431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:44:58.030233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270431290.mount: Deactivated successfully. 
Aug 13 00:44:58.034701 containerd[1537]: time="2025-08-13T00:44:58.033986752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:44:58.034701 containerd[1537]: time="2025-08-13T00:44:58.034663404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:44:58.035157 containerd[1537]: time="2025-08-13T00:44:58.035133792Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:44:58.036843 containerd[1537]: time="2025-08-13T00:44:58.036811805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:44:58.037701 containerd[1537]: time="2025-08-13T00:44:58.037665167Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.702311ms" Aug 13 00:44:58.037701 containerd[1537]: time="2025-08-13T00:44:58.037700488Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:44:58.038274 containerd[1537]: time="2025-08-13T00:44:58.038233982Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:44:58.529238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2150676630.mount: Deactivated successfully. Aug 13 00:44:59.114630 systemd-resolved[1396]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
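The "degraded feature set" lines above mean systemd-resolved's EDNS0 probes to the DigitalOcean resolvers failed or timed out, so it fell back to plain UDP for those servers. A sketch of the same probe using dnspython (a third-party package, installed via `pip install dnspython`), with the resolver address taken from the log:

```python
import dns.message
import dns.query

SERVER = "67.207.67.3"  # resolver from the log above

plain = dns.message.make_query("example.com", "A")
edns = dns.message.make_query("example.com", "A", use_edns=0, payload=1232)

for label, query in (("plain UDP", plain), ("UDP+EDNS0", edns)):
    try:
        dns.query.udp(query, SERVER, timeout=2)
        print(label, "ok")
    except Exception as exc:  # a timeout here is what triggers the downgrade
        print(label, "failed:", exc)
```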
Aug 13 00:45:00.440193 containerd[1537]: time="2025-08-13T00:45:00.440096270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:00.460443 containerd[1537]: time="2025-08-13T00:45:00.460332598Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:45:00.461197 containerd[1537]: time="2025-08-13T00:45:00.461098644Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:00.464838 containerd[1537]: time="2025-08-13T00:45:00.464740309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:00.466476 containerd[1537]: time="2025-08-13T00:45:00.466269509Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.427564364s" Aug 13 00:45:00.466476 containerd[1537]: time="2025-08-13T00:45:00.466322437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:45:03.768032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:45:03.768312 systemd[1]: kubelet.service: Consumed 238ms CPU time, 110.8M memory peak. Aug 13 00:45:03.772339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:03.820677 systemd[1]: Reload requested from client PID 2217 ('systemctl') (unit session-7.scope)... Aug 13 00:45:03.820697 systemd[1]: Reloading... Aug 13 00:45:04.037535 zram_generator::config[2272]: No configuration found. Aug 13 00:45:04.144234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:04.295010 systemd[1]: Reloading finished in 473 ms. Aug 13 00:45:04.360227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:04.367348 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:45:04.367777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:45:04.367871 systemd[1]: kubelet.service: Consumed 165ms CPU time, 98.8M memory peak. Aug 13 00:45:04.371753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:04.570687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:45:04.588030 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:45:04.650915 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:45:04.650915 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:45:04.650915 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:45:04.651517 kubelet[2316]: I0813 00:45:04.650973 2316 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:45:05.223433 kubelet[2316]: I0813 00:45:05.223282 2316 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:45:05.223433 kubelet[2316]: I0813 00:45:05.223353 2316 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:45:05.223842 kubelet[2316]: I0813 00:45:05.223771 2316 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:45:05.261003 kubelet[2316]: E0813 00:45:05.260934 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.133.69:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:05.264486 kubelet[2316]: I0813 00:45:05.264225 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:45:05.278269 kubelet[2316]: I0813 00:45:05.278218 2316 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:45:05.283765 kubelet[2316]: I0813 00:45:05.283703 2316 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:45:05.290236 kubelet[2316]: I0813 00:45:05.289672 2316 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:45:05.290236 kubelet[2316]: I0813 00:45:05.289788 2316 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-8-f473d4f215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:45:05.292070 kubelet[2316]: I0813 00:45:05.292018 2316 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:45:05.292509 kubelet[2316]: I0813 00:45:05.292234 2316 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:45:05.293635 kubelet[2316]: I0813 00:45:05.293599 2316 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:45:05.299927 kubelet[2316]: I0813 00:45:05.299868 2316 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:45:05.300394 kubelet[2316]: I0813 00:45:05.300175 2316 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:45:05.300394 kubelet[2316]: I0813 00:45:05.300226 2316 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:45:05.300394 kubelet[2316]: I0813 00:45:05.300252 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:45:05.303858 kubelet[2316]: W0813 00:45:05.303165 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.133.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-8-f473d4f215&limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:05.303858 kubelet[2316]: E0813 00:45:05.303259 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.133.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-8-f473d4f215&limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:05.306222 
kubelet[2316]: I0813 00:45:05.306174 2316 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:45:05.310543 kubelet[2316]: I0813 00:45:05.310493 2316 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:45:05.311885 kubelet[2316]: W0813 00:45:05.311849 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:45:05.313006 kubelet[2316]: I0813 00:45:05.312971 2316 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:45:05.313324 kubelet[2316]: I0813 00:45:05.313303 2316 server.go:1287] "Started kubelet" Aug 13 00:45:05.313699 kubelet[2316]: W0813 00:45:05.313634 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.133.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:05.313908 kubelet[2316]: E0813 00:45:05.313838 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.133.69:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:05.318505 kubelet[2316]: I0813 00:45:05.318429 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:45:05.324005 kubelet[2316]: I0813 00:45:05.323944 2316 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:45:05.325743 kubelet[2316]: I0813 00:45:05.325688 2316 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:45:05.327737 kubelet[2316]: I0813 00:45:05.327655 2316 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:45:05.328336 kubelet[2316]: I0813 00:45:05.328315 2316 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:45:05.328642 kubelet[2316]: I0813 00:45:05.328565 2316 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:45:05.329271 kubelet[2316]: E0813 00:45:05.329243 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-8-f473d4f215\" not found" Aug 13 00:45:05.341649 kubelet[2316]: I0813 00:45:05.331707 2316 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:45:05.341840 kubelet[2316]: I0813 00:45:05.331967 2316 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:45:05.342722 kubelet[2316]: I0813 00:45:05.342678 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:45:05.343288 kubelet[2316]: E0813 00:45:05.343250 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.133.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-8-f473d4f215?timeout=10s\": dial tcp 146.190.133.69:6443: connect: connection refused" interval="200ms" Aug 13 00:45:05.343437 kubelet[2316]: W0813 00:45:05.343238 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://146.190.133.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:05.343578 kubelet[2316]: E0813 00:45:05.343553 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.133.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:05.343965 kubelet[2316]: I0813 00:45:05.343938 2316 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:45:05.344218 kubelet[2316]: I0813 00:45:05.344191 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:45:05.346545 kubelet[2316]: E0813 00:45:05.343338 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.133.69:6443/api/v1/namespaces/default/events\": dial tcp 146.190.133.69:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-8-f473d4f215.185b2cf3c6b22f4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-8-f473d4f215,UID:ci-4372.1.0-8-f473d4f215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-8-f473d4f215,},FirstTimestamp:2025-08-13 00:45:05.313181519 +0000 UTC m=+0.717599839,LastTimestamp:2025-08-13 00:45:05.313181519 +0000 UTC m=+0.717599839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-8-f473d4f215,}" Aug 13 00:45:05.357206 kubelet[2316]: I0813 00:45:05.357170 2316 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:45:05.378811 kubelet[2316]: I0813 00:45:05.378558 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:45:05.381079 kubelet[2316]: I0813 00:45:05.380473 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:45:05.381079 kubelet[2316]: I0813 00:45:05.380519 2316 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:45:05.381079 kubelet[2316]: I0813 00:45:05.380552 2316 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:45:05.381079 kubelet[2316]: I0813 00:45:05.380563 2316 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:45:05.381079 kubelet[2316]: E0813 00:45:05.380647 2316 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:45:05.391966 kubelet[2316]: E0813 00:45:05.391923 2316 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:45:05.394408 kubelet[2316]: W0813 00:45:05.393365 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.133.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:05.394408 kubelet[2316]: E0813 00:45:05.393488 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.133.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:05.400834 kubelet[2316]: I0813 00:45:05.400800 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:45:05.401076 kubelet[2316]: I0813 00:45:05.401062 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:45:05.401169 kubelet[2316]: I0813 00:45:05.401157 2316 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:45:05.402756 kubelet[2316]: I0813 00:45:05.402725 2316 policy_none.go:49] "None policy: Start" Aug 13 00:45:05.403336 kubelet[2316]: I0813 00:45:05.402995 2316 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:45:05.403336 kubelet[2316]: I0813 00:45:05.403030 2316 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:45:05.411698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:45:05.431119 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:45:05.441957 kubelet[2316]: E0813 00:45:05.441898 2316 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.1.0-8-f473d4f215\" not found" Aug 13 00:45:05.445812 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:45:05.449356 kubelet[2316]: I0813 00:45:05.448461 2316 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:45:05.449356 kubelet[2316]: I0813 00:45:05.448860 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:45:05.449356 kubelet[2316]: I0813 00:45:05.448878 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:45:05.450805 kubelet[2316]: I0813 00:45:05.450780 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:45:05.452669 kubelet[2316]: E0813 00:45:05.452612 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:45:05.452985 kubelet[2316]: E0813 00:45:05.452963 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-8-f473d4f215\" not found" Aug 13 00:45:05.492992 systemd[1]: Created slice kubepods-burstable-pod2474733e70ed6311da0b475039ca1d74.slice - libcontainer container kubepods-burstable-pod2474733e70ed6311da0b475039ca1d74.slice. 
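Editor's note: the three slices systemd creates above (kubepods.slice, kubepods-besteffort.slice, kubepods-burstable.slice) mirror the pod QoS classes, and each pod then gets its own child slice under the matching parent; the control-plane static pods land in kubepods-burstable-pod<hash>.slice. A simplified restatement of the QoS rules for illustration (init and ephemeral containers are ignored here), not the kubelet's actual implementation:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// qosClass restates the classification that decides whether a pod's cgroup
// lands under kubepods.slice (Guaranteed), kubepods-burstable.slice, or
// kubepods-besteffort.slice.
func qosClass(pod *corev1.Pod) string {
	hasAny := false
	guaranteed := true
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			hasAny = true
		}
		for _, r := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
			req, rok := c.Resources.Requests[r]
			lim, lok := c.Resources.Limits[r]
			// Guaranteed requires requests == limits for both CPU and memory.
			if !rok || !lok || req.Cmp(lim) != 0 {
				guaranteed = false
			}
		}
	}
	switch {
	case hasAny && guaranteed:
		return "Guaranteed"
	case hasAny:
		return "Burstable"
	default:
		return "BestEffort"
	}
}

func main() {
	fmt.Println(qosClass(&corev1.Pod{})) // BestEffort: no requests or limits set
}
```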
Aug 13 00:45:05.518650 kubelet[2316]: E0813 00:45:05.518590 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.523700 systemd[1]: Created slice kubepods-burstable-pod5e1f18accffbef0a0617c4fecbea1275.slice - libcontainer container kubepods-burstable-pod5e1f18accffbef0a0617c4fecbea1275.slice. Aug 13 00:45:05.527919 kubelet[2316]: E0813 00:45:05.527661 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.530802 systemd[1]: Created slice kubepods-burstable-pod488fbe60a04d07901b7f1eb470fb0757.slice - libcontainer container kubepods-burstable-pod488fbe60a04d07901b7f1eb470fb0757.slice. Aug 13 00:45:05.534355 kubelet[2316]: E0813 00:45:05.534310 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544464 kubelet[2316]: I0813 00:45:05.544070 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544464 kubelet[2316]: I0813 00:45:05.544136 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544464 kubelet[2316]: I0813 00:45:05.544169 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544464 kubelet[2316]: I0813 00:45:05.544197 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e1f18accffbef0a0617c4fecbea1275-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-8-f473d4f215\" (UID: \"5e1f18accffbef0a0617c4fecbea1275\") " pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544464 kubelet[2316]: I0813 00:45:05.544227 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544764 kubelet[2316]: I0813 00:45:05.544254 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-ca-certs\") pod 
\"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544764 kubelet[2316]: I0813 00:45:05.544276 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544764 kubelet[2316]: E0813 00:45:05.544279 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.133.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-8-f473d4f215?timeout=10s\": dial tcp 146.190.133.69:6443: connect: connection refused" interval="400ms" Aug 13 00:45:05.544764 kubelet[2316]: I0813 00:45:05.544299 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.544764 kubelet[2316]: I0813 00:45:05.544326 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.551037 kubelet[2316]: I0813 00:45:05.550999 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.551495 kubelet[2316]: E0813 00:45:05.551462 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.133.69:6443/api/v1/nodes\": dial tcp 146.190.133.69:6443: connect: connection refused" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.753444 kubelet[2316]: I0813 00:45:05.753208 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.754490 kubelet[2316]: E0813 00:45:05.754426 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.133.69:6443/api/v1/nodes\": dial tcp 146.190.133.69:6443: connect: connection refused" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:05.819674 kubelet[2316]: E0813 00:45:05.819617 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:05.820737 containerd[1537]: time="2025-08-13T00:45:05.820651905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-8-f473d4f215,Uid:2474733e70ed6311da0b475039ca1d74,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:05.828313 kubelet[2316]: E0813 00:45:05.828259 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:05.829063 containerd[1537]: time="2025-08-13T00:45:05.828805404Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-8-f473d4f215,Uid:5e1f18accffbef0a0617c4fecbea1275,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:05.835220 kubelet[2316]: E0813 00:45:05.835051 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:05.836095 containerd[1537]: time="2025-08-13T00:45:05.836043842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-8-f473d4f215,Uid:488fbe60a04d07901b7f1eb470fb0757,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:05.950329 kubelet[2316]: E0813 00:45:05.948498 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.133.69:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-8-f473d4f215?timeout=10s\": dial tcp 146.190.133.69:6443: connect: connection refused" interval="800ms" Aug 13 00:45:05.953636 containerd[1537]: time="2025-08-13T00:45:05.953583366Z" level=info msg="connecting to shim abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79" address="unix:///run/containerd/s/179d2e047fc993d1e3ba155d2f3b59285d8d1efb0b60784f2a37a01186e41fc0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:05.955207 containerd[1537]: time="2025-08-13T00:45:05.955088550Z" level=info msg="connecting to shim e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0" address="unix:///run/containerd/s/0e452e1cc73d48883f49d1708b4e8fea881ca3e4493cc1d90420af79c70eb115" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:05.957722 containerd[1537]: time="2025-08-13T00:45:05.957670702Z" level=info msg="connecting to shim 86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338" address="unix:///run/containerd/s/b82c00488ad7a3a8d6199048bb892e1fc4113a4736781783c8afeffe46a3393b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:06.069704 systemd[1]: Started cri-containerd-86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338.scope - libcontainer container 86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338. Aug 13 00:45:06.074185 systemd[1]: Started cri-containerd-e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0.scope - libcontainer container e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0. Aug 13 00:45:06.080059 systemd[1]: Started cri-containerd-abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79.scope - libcontainer container abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79. 
Aug 13 00:45:06.159167 kubelet[2316]: I0813 00:45:06.158593 2316 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:06.159167 kubelet[2316]: E0813 00:45:06.158937 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.133.69:6443/api/v1/nodes\": dial tcp 146.190.133.69:6443: connect: connection refused" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:06.176110 containerd[1537]: time="2025-08-13T00:45:06.176045804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-8-f473d4f215,Uid:488fbe60a04d07901b7f1eb470fb0757,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338\"" Aug 13 00:45:06.178992 kubelet[2316]: E0813 00:45:06.178955 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:06.191928 containerd[1537]: time="2025-08-13T00:45:06.191791496Z" level=info msg="CreateContainer within sandbox \"86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:45:06.205164 containerd[1537]: time="2025-08-13T00:45:06.205078920Z" level=info msg="Container 57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:06.206998 containerd[1537]: time="2025-08-13T00:45:06.206935224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-8-f473d4f215,Uid:5e1f18accffbef0a0617c4fecbea1275,Namespace:kube-system,Attempt:0,} returns sandbox id \"e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0\"" Aug 13 00:45:06.210071 kubelet[2316]: E0813 00:45:06.210031 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:06.213066 containerd[1537]: time="2025-08-13T00:45:06.212662070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-8-f473d4f215,Uid:2474733e70ed6311da0b475039ca1d74,Namespace:kube-system,Attempt:0,} returns sandbox id \"abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79\"" Aug 13 00:45:06.214181 kubelet[2316]: E0813 00:45:06.214154 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:06.215082 containerd[1537]: time="2025-08-13T00:45:06.214871693Z" level=info msg="CreateContainer within sandbox \"e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:45:06.223415 containerd[1537]: time="2025-08-13T00:45:06.222353806Z" level=info msg="Container a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:06.232508 containerd[1537]: time="2025-08-13T00:45:06.232464856Z" level=info msg="CreateContainer within sandbox \"86ba8dbda50dd22fe0f6a3dea02835fdad5a09284bd6cdbb82e9babe84deb338\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1\"" Aug 13 00:45:06.234520 containerd[1537]: 
time="2025-08-13T00:45:06.234473309Z" level=info msg="StartContainer for \"57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1\"" Aug 13 00:45:06.236559 containerd[1537]: time="2025-08-13T00:45:06.235214695Z" level=info msg="CreateContainer within sandbox \"abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:45:06.236559 containerd[1537]: time="2025-08-13T00:45:06.236464210Z" level=info msg="connecting to shim 57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1" address="unix:///run/containerd/s/b82c00488ad7a3a8d6199048bb892e1fc4113a4736781783c8afeffe46a3393b" protocol=ttrpc version=3 Aug 13 00:45:06.241390 containerd[1537]: time="2025-08-13T00:45:06.241321215Z" level=info msg="CreateContainer within sandbox \"e340f9a3aff22c1cf42747bc907c09de8b68eae135c2f7ea837803c93cfe16c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20\"" Aug 13 00:45:06.242153 containerd[1537]: time="2025-08-13T00:45:06.242103772Z" level=info msg="StartContainer for \"a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20\"" Aug 13 00:45:06.243763 containerd[1537]: time="2025-08-13T00:45:06.243728565Z" level=info msg="connecting to shim a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20" address="unix:///run/containerd/s/0e452e1cc73d48883f49d1708b4e8fea881ca3e4493cc1d90420af79c70eb115" protocol=ttrpc version=3 Aug 13 00:45:06.253581 containerd[1537]: time="2025-08-13T00:45:06.253496640Z" level=info msg="Container 06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:06.262177 kubelet[2316]: W0813 00:45:06.262092 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.133.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-8-f473d4f215&limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:06.262490 kubelet[2316]: E0813 00:45:06.262465 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.133.69:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-8-f473d4f215&limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:06.269263 containerd[1537]: time="2025-08-13T00:45:06.269192071Z" level=info msg="CreateContainer within sandbox \"abf40af5953d37aaf20e1499a9cb587c239b5aac6121ea3e64484ac94acfcd79\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf\"" Aug 13 00:45:06.270677 systemd[1]: Started cri-containerd-57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1.scope - libcontainer container 57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1. 
Aug 13 00:45:06.272448 containerd[1537]: time="2025-08-13T00:45:06.272295717Z" level=info msg="StartContainer for \"06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf\"" Aug 13 00:45:06.282920 containerd[1537]: time="2025-08-13T00:45:06.282817164Z" level=info msg="connecting to shim 06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf" address="unix:///run/containerd/s/179d2e047fc993d1e3ba155d2f3b59285d8d1efb0b60784f2a37a01186e41fc0" protocol=ttrpc version=3 Aug 13 00:45:06.286727 systemd[1]: Started cri-containerd-a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20.scope - libcontainer container a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20. Aug 13 00:45:06.328007 kubelet[2316]: W0813 00:45:06.325355 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.133.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:06.328007 kubelet[2316]: E0813 00:45:06.327552 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.133.69:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:06.336676 systemd[1]: Started cri-containerd-06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf.scope - libcontainer container 06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf. Aug 13 00:45:06.378249 containerd[1537]: time="2025-08-13T00:45:06.378049827Z" level=info msg="StartContainer for \"57d34251398629f36d5e88aea9d98837dbfa012afa070e0c9fc698ff74dad1d1\" returns successfully" Aug 13 00:45:06.412462 kubelet[2316]: E0813 00:45:06.412407 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:06.413563 kubelet[2316]: E0813 00:45:06.413521 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:06.448106 containerd[1537]: time="2025-08-13T00:45:06.448052039Z" level=info msg="StartContainer for \"a923656360f1ccc052f6af65827c3adc21293e2923fec1ce3cc4c4046aeb7c20\" returns successfully" Aug 13 00:45:06.470468 containerd[1537]: time="2025-08-13T00:45:06.469996032Z" level=info msg="StartContainer for \"06edf5b5983b6b192ed6c83bfe4235adb5b65657aa28e123719177f1c18e82cf\" returns successfully" Aug 13 00:45:06.594534 kubelet[2316]: W0813 00:45:06.593720 2316 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.133.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.133.69:6443: connect: connection refused Aug 13 00:45:06.594534 kubelet[2316]: E0813 00:45:06.593822 2316 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.133.69:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.133.69:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:45:06.962682 kubelet[2316]: I0813 00:45:06.962602 2316 kubelet_node_status.go:75] 
"Attempting to register node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:07.430769 kubelet[2316]: E0813 00:45:07.430693 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:07.431692 kubelet[2316]: E0813 00:45:07.430874 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:07.440482 kubelet[2316]: E0813 00:45:07.439965 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:07.440482 kubelet[2316]: E0813 00:45:07.440223 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:07.441188 kubelet[2316]: E0813 00:45:07.441056 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:07.442281 kubelet[2316]: E0813 00:45:07.441732 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:08.443076 kubelet[2316]: E0813 00:45:08.442480 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.443076 kubelet[2316]: E0813 00:45:08.442547 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.443076 kubelet[2316]: E0813 00:45:08.442660 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:08.443076 kubelet[2316]: E0813 00:45:08.442668 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:08.446877 kubelet[2316]: E0813 00:45:08.444251 2316 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.448407 kubelet[2316]: E0813 00:45:08.447489 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:08.755974 kubelet[2316]: E0813 00:45:08.755513 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-8-f473d4f215\" not found" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.873612 kubelet[2316]: I0813 00:45:08.873464 2316 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.873612 kubelet[2316]: E0813 00:45:08.873516 2316 kubelet_node_status.go:548] "Error updating node status, will retry" 
err="error getting node \"ci-4372.1.0-8-f473d4f215\": node \"ci-4372.1.0-8-f473d4f215\" not found" Aug 13 00:45:08.932593 kubelet[2316]: I0813 00:45:08.932537 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.942049 kubelet[2316]: E0813 00:45:08.941984 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.942049 kubelet[2316]: I0813 00:45:08.942026 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.946583 kubelet[2316]: E0813 00:45:08.946541 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.946583 kubelet[2316]: I0813 00:45:08.946573 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:08.949005 kubelet[2316]: E0813 00:45:08.948957 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-8-f473d4f215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:09.318960 kubelet[2316]: I0813 00:45:09.318912 2316 apiserver.go:52] "Watching apiserver" Aug 13 00:45:09.342936 kubelet[2316]: I0813 00:45:09.342879 2316 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:45:09.443232 kubelet[2316]: I0813 00:45:09.443025 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:09.443232 kubelet[2316]: I0813 00:45:09.443067 2316 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:09.446202 kubelet[2316]: E0813 00:45:09.446099 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-8-f473d4f215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:09.446652 kubelet[2316]: E0813 00:45:09.446555 2316 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:09.446863 kubelet[2316]: E0813 00:45:09.446799 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:09.446940 kubelet[2316]: E0813 00:45:09.446923 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:11.038661 systemd[1]: Reload requested from client PID 2591 ('systemctl') (unit session-7.scope)... Aug 13 00:45:11.038679 systemd[1]: Reloading... Aug 13 00:45:11.190408 zram_generator::config[2643]: No configuration found. 
Aug 13 00:45:11.308835 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:45:11.487622 systemd[1]: Reloading finished in 448 ms. Aug 13 00:45:11.526463 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:11.542221 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:45:11.542655 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:45:11.542743 systemd[1]: kubelet.service: Consumed 1.259s CPU time, 125.9M memory peak. Aug 13 00:45:11.545866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:45:11.722569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:45:11.736463 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:45:11.804145 kubelet[2685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:45:11.805147 kubelet[2685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:45:11.805147 kubelet[2685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:45:11.805147 kubelet[2685]: I0813 00:45:11.804440 2685 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:45:11.818444 kubelet[2685]: I0813 00:45:11.818225 2685 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:45:11.818444 kubelet[2685]: I0813 00:45:11.818260 2685 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:45:11.818682 kubelet[2685]: I0813 00:45:11.818641 2685 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:45:11.827649 kubelet[2685]: I0813 00:45:11.827598 2685 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:45:11.840724 kubelet[2685]: I0813 00:45:11.840173 2685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:45:11.853224 kubelet[2685]: I0813 00:45:11.853195 2685 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:45:11.861099 kubelet[2685]: I0813 00:45:11.861058 2685 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:45:11.861705 kubelet[2685]: I0813 00:45:11.861661 2685 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:45:11.862218 kubelet[2685]: I0813 00:45:11.861822 2685 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-8-f473d4f215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:45:11.862893 kubelet[2685]: I0813 00:45:11.862538 2685 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:45:11.862893 kubelet[2685]: I0813 00:45:11.862580 2685 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:45:11.862893 kubelet[2685]: I0813 00:45:11.862689 2685 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:45:11.863182 kubelet[2685]: I0813 00:45:11.863168 2685 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:45:11.863281 kubelet[2685]: I0813 00:45:11.863269 2685 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:45:11.863349 kubelet[2685]: I0813 00:45:11.863341 2685 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:45:11.863438 kubelet[2685]: I0813 00:45:11.863428 2685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:45:11.876628 kubelet[2685]: I0813 00:45:11.874785 2685 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:45:11.876628 kubelet[2685]: I0813 00:45:11.875360 2685 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:45:11.879028 kubelet[2685]: I0813 00:45:11.877435 2685 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:45:11.879028 kubelet[2685]: I0813 00:45:11.877496 2685 server.go:1287] "Started kubelet" Aug 13 00:45:11.883265 kubelet[2685]: I0813 00:45:11.882459 2685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:45:11.889079 kubelet[2685]: E0813 00:45:11.889043 2685 kubelet.go:1555] 
"Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:45:11.892649 kubelet[2685]: I0813 00:45:11.892615 2685 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:45:11.893195 kubelet[2685]: I0813 00:45:11.893135 2685 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:45:11.894333 kubelet[2685]: I0813 00:45:11.894210 2685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:45:11.894695 kubelet[2685]: I0813 00:45:11.894679 2685 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:45:11.894977 kubelet[2685]: I0813 00:45:11.894962 2685 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:45:11.895381 kubelet[2685]: I0813 00:45:11.895354 2685 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:45:11.895924 kubelet[2685]: I0813 00:45:11.895841 2685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:45:11.896791 kubelet[2685]: I0813 00:45:11.896767 2685 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:45:11.905637 kubelet[2685]: I0813 00:45:11.905589 2685 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:45:11.909124 kubelet[2685]: I0813 00:45:11.909072 2685 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:45:11.909124 kubelet[2685]: I0813 00:45:11.909099 2685 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:45:11.927596 kubelet[2685]: I0813 00:45:11.927535 2685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:45:11.930126 kubelet[2685]: I0813 00:45:11.930069 2685 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:45:11.930126 kubelet[2685]: I0813 00:45:11.930100 2685 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:45:11.930126 kubelet[2685]: I0813 00:45:11.930124 2685 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:45:11.930126 kubelet[2685]: I0813 00:45:11.930132 2685 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:45:11.930463 kubelet[2685]: E0813 00:45:11.930183 2685 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:45:11.971241 kubelet[2685]: I0813 00:45:11.971205 2685 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:45:11.971511 kubelet[2685]: I0813 00:45:11.971464 2685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:45:11.971650 kubelet[2685]: I0813 00:45:11.971637 2685 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:45:11.971991 kubelet[2685]: I0813 00:45:11.971967 2685 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:45:11.972124 kubelet[2685]: I0813 00:45:11.972084 2685 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:45:11.972205 kubelet[2685]: I0813 00:45:11.972194 2685 policy_none.go:49] "None policy: Start" Aug 13 00:45:11.972280 kubelet[2685]: I0813 00:45:11.972270 2685 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:45:11.972354 kubelet[2685]: I0813 00:45:11.972345 2685 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:45:11.972614 kubelet[2685]: I0813 00:45:11.972594 2685 state_mem.go:75] "Updated machine memory state" Aug 13 00:45:11.979415 kubelet[2685]: I0813 00:45:11.979281 2685 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:45:11.980754 kubelet[2685]: I0813 00:45:11.980729 2685 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:45:11.980845 kubelet[2685]: I0813 00:45:11.980751 2685 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:45:11.981728 kubelet[2685]: I0813 00:45:11.981674 2685 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:45:11.985521 kubelet[2685]: E0813 00:45:11.985486 2685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:45:12.032981 kubelet[2685]: I0813 00:45:12.031970 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.034901 kubelet[2685]: I0813 00:45:12.034864 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.036879 kubelet[2685]: I0813 00:45:12.036663 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.044977 kubelet[2685]: W0813 00:45:12.044938 2685 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:45:12.046409 kubelet[2685]: W0813 00:45:12.045012 2685 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:45:12.050257 kubelet[2685]: W0813 00:45:12.049321 2685 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:45:12.089091 kubelet[2685]: I0813 00:45:12.086906 2685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.096470 kubelet[2685]: I0813 00:45:12.096282 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.096470 kubelet[2685]: I0813 00:45:12.096320 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.096470 kubelet[2685]: I0813 00:45:12.096345 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.097152 kubelet[2685]: I0813 00:45:12.096363 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.097853 kubelet[2685]: I0813 00:45:12.097429 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " 
pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.098859 kubelet[2685]: I0813 00:45:12.098180 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.099237 kubelet[2685]: I0813 00:45:12.099128 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2474733e70ed6311da0b475039ca1d74-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-8-f473d4f215\" (UID: \"2474733e70ed6311da0b475039ca1d74\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.100311 kubelet[2685]: I0813 00:45:12.100228 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e1f18accffbef0a0617c4fecbea1275-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-8-f473d4f215\" (UID: \"5e1f18accffbef0a0617c4fecbea1275\") " pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.100311 kubelet[2685]: I0813 00:45:12.100274 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/488fbe60a04d07901b7f1eb470fb0757-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" (UID: \"488fbe60a04d07901b7f1eb470fb0757\") " pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.102073 kubelet[2685]: I0813 00:45:12.101975 2685 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.102319 kubelet[2685]: I0813 00:45:12.102235 2685 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.347708 kubelet[2685]: E0813 00:45:12.347591 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:12.349401 kubelet[2685]: E0813 00:45:12.348152 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:12.349902 kubelet[2685]: E0813 00:45:12.349858 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:12.865407 kubelet[2685]: I0813 00:45:12.864993 2685 apiserver.go:52] "Watching apiserver" Aug 13 00:45:12.895840 kubelet[2685]: I0813 00:45:12.895778 2685 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:45:12.957347 kubelet[2685]: E0813 00:45:12.954786 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:12.957347 kubelet[2685]: I0813 00:45:12.954931 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.957673 
kubelet[2685]: I0813 00:45:12.957639 2685 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.964058 kubelet[2685]: W0813 00:45:12.964021 2685 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:45:12.964467 kubelet[2685]: E0813 00:45:12.964346 2685 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.1.0-8-f473d4f215\" already exists" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.965006 kubelet[2685]: E0813 00:45:12.964804 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:12.974704 kubelet[2685]: W0813 00:45:12.974648 2685 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:45:12.974892 kubelet[2685]: E0813 00:45:12.974740 2685 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.1.0-8-f473d4f215\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" Aug 13 00:45:12.975187 kubelet[2685]: E0813 00:45:12.974965 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:13.005081 kubelet[2685]: I0813 00:45:13.004999 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-8-f473d4f215" podStartSLOduration=1.004974317 podStartE2EDuration="1.004974317s" podCreationTimestamp="2025-08-13 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:45:12.99088482 +0000 UTC m=+1.246922162" watchObservedRunningTime="2025-08-13 00:45:13.004974317 +0000 UTC m=+1.261011654" Aug 13 00:45:13.018328 kubelet[2685]: I0813 00:45:13.018207 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-8-f473d4f215" podStartSLOduration=1.017364023 podStartE2EDuration="1.017364023s" podCreationTimestamp="2025-08-13 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:45:13.005494169 +0000 UTC m=+1.261531522" watchObservedRunningTime="2025-08-13 00:45:13.017364023 +0000 UTC m=+1.273401373" Aug 13 00:45:13.032199 kubelet[2685]: I0813 00:45:13.032124 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-8-f473d4f215" podStartSLOduration=1.032103446 podStartE2EDuration="1.032103446s" podCreationTimestamp="2025-08-13 00:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:45:13.018865404 +0000 UTC m=+1.274902746" watchObservedRunningTime="2025-08-13 00:45:13.032103446 +0000 UTC m=+1.288140789" Aug 13 00:45:13.957367 kubelet[2685]: E0813 00:45:13.957143 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Aug 13 00:45:13.957987 kubelet[2685]: E0813 00:45:13.957740 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:16.860225 kubelet[2685]: I0813 00:45:16.860157 2685 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:45:16.861154 containerd[1537]: time="2025-08-13T00:45:16.861106118Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:45:16.862204 kubelet[2685]: I0813 00:45:16.861956 2685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:45:17.599942 kubelet[2685]: I0813 00:45:17.599712 2685 status_manager.go:890] "Failed to get status for pod" podUID="a930f92e-82f3-4691-aba9-8a09cf5cd347" pod="kube-system/kube-proxy-kcxvj" err="pods \"kube-proxy-kcxvj\" is forbidden: User \"system:node:ci-4372.1.0-8-f473d4f215\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372.1.0-8-f473d4f215' and this object" Aug 13 00:45:17.609252 systemd[1]: Created slice kubepods-besteffort-poda930f92e_82f3_4691_aba9_8a09cf5cd347.slice - libcontainer container kubepods-besteffort-poda930f92e_82f3_4691_aba9_8a09cf5cd347.slice. Aug 13 00:45:17.642717 kubelet[2685]: I0813 00:45:17.642612 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a930f92e-82f3-4691-aba9-8a09cf5cd347-lib-modules\") pod \"kube-proxy-kcxvj\" (UID: \"a930f92e-82f3-4691-aba9-8a09cf5cd347\") " pod="kube-system/kube-proxy-kcxvj" Aug 13 00:45:17.643046 kubelet[2685]: I0813 00:45:17.642910 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82n2z\" (UniqueName: \"kubernetes.io/projected/a930f92e-82f3-4691-aba9-8a09cf5cd347-kube-api-access-82n2z\") pod \"kube-proxy-kcxvj\" (UID: \"a930f92e-82f3-4691-aba9-8a09cf5cd347\") " pod="kube-system/kube-proxy-kcxvj" Aug 13 00:45:17.643046 kubelet[2685]: I0813 00:45:17.642998 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a930f92e-82f3-4691-aba9-8a09cf5cd347-xtables-lock\") pod \"kube-proxy-kcxvj\" (UID: \"a930f92e-82f3-4691-aba9-8a09cf5cd347\") " pod="kube-system/kube-proxy-kcxvj" Aug 13 00:45:17.643046 kubelet[2685]: I0813 00:45:17.643024 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a930f92e-82f3-4691-aba9-8a09cf5cd347-kube-proxy\") pod \"kube-proxy-kcxvj\" (UID: \"a930f92e-82f3-4691-aba9-8a09cf5cd347\") " pod="kube-system/kube-proxy-kcxvj" Aug 13 00:45:17.925401 kubelet[2685]: E0813 00:45:17.925292 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:17.926981 containerd[1537]: time="2025-08-13T00:45:17.926936079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcxvj,Uid:a930f92e-82f3-4691-aba9-8a09cf5cd347,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:17.956763 containerd[1537]: time="2025-08-13T00:45:17.956485362Z" level=info msg="connecting to shim 
3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4" address="unix:///run/containerd/s/e611aa8f1f6418c0bb9fc0f11e5d7a121ac0fb4bc9443506825aab9d26d593c8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:18.004722 systemd[1]: Started cri-containerd-3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4.scope - libcontainer container 3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4. Aug 13 00:45:18.058201 systemd[1]: Created slice kubepods-besteffort-podd7b07b4b_e9e6_4382_83ae_0f2ef1b44151.slice - libcontainer container kubepods-besteffort-podd7b07b4b_e9e6_4382_83ae_0f2ef1b44151.slice. Aug 13 00:45:18.113436 containerd[1537]: time="2025-08-13T00:45:18.113356243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcxvj,Uid:a930f92e-82f3-4691-aba9-8a09cf5cd347,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4\"" Aug 13 00:45:18.114896 kubelet[2685]: E0813 00:45:18.114862 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:18.121412 containerd[1537]: time="2025-08-13T00:45:18.120926939Z" level=info msg="CreateContainer within sandbox \"3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:45:18.140137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022951345.mount: Deactivated successfully. Aug 13 00:45:18.140904 containerd[1537]: time="2025-08-13T00:45:18.140834288Z" level=info msg="Container 3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:18.147311 kubelet[2685]: I0813 00:45:18.146728 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7b07b4b-e9e6-4382-83ae-0f2ef1b44151-var-lib-calico\") pod \"tigera-operator-747864d56d-z2l5b\" (UID: \"d7b07b4b-e9e6-4382-83ae-0f2ef1b44151\") " pod="tigera-operator/tigera-operator-747864d56d-z2l5b" Aug 13 00:45:18.147311 kubelet[2685]: I0813 00:45:18.146793 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scp4m\" (UniqueName: \"kubernetes.io/projected/d7b07b4b-e9e6-4382-83ae-0f2ef1b44151-kube-api-access-scp4m\") pod \"tigera-operator-747864d56d-z2l5b\" (UID: \"d7b07b4b-e9e6-4382-83ae-0f2ef1b44151\") " pod="tigera-operator/tigera-operator-747864d56d-z2l5b" Aug 13 00:45:18.158197 containerd[1537]: time="2025-08-13T00:45:18.158099011Z" level=info msg="CreateContainer within sandbox \"3cfea8346a72a8da8d53634bc5b5b4c654dde1b3862b4bcb56f814265c3154e4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814\"" Aug 13 00:45:18.159757 containerd[1537]: time="2025-08-13T00:45:18.159698082Z" level=info msg="StartContainer for \"3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814\"" Aug 13 00:45:18.162817 containerd[1537]: time="2025-08-13T00:45:18.162756746Z" level=info msg="connecting to shim 3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814" address="unix:///run/containerd/s/e611aa8f1f6418c0bb9fc0f11e5d7a121ac0fb4bc9443506825aab9d26d593c8" protocol=ttrpc version=3 Aug 13 00:45:18.194717 systemd[1]: Started 
cri-containerd-3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814.scope - libcontainer container 3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814. Aug 13 00:45:18.273406 containerd[1537]: time="2025-08-13T00:45:18.272605308Z" level=info msg="StartContainer for \"3cea5d8ab026de0756bb79f8a47f31d0563b827583c05765a07ebe28af174814\" returns successfully" Aug 13 00:45:18.362892 containerd[1537]: time="2025-08-13T00:45:18.362835629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z2l5b,Uid:d7b07b4b-e9e6-4382-83ae-0f2ef1b44151,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:45:18.385582 containerd[1537]: time="2025-08-13T00:45:18.385517087Z" level=info msg="connecting to shim 15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045" address="unix:///run/containerd/s/138fb3c3e062643dd85547c54d14209e10002fa3cf036e394c04691dd29e67da" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:18.432724 systemd[1]: Started cri-containerd-15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045.scope - libcontainer container 15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045. Aug 13 00:45:18.515499 containerd[1537]: time="2025-08-13T00:45:18.515333540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z2l5b,Uid:d7b07b4b-e9e6-4382-83ae-0f2ef1b44151,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045\"" Aug 13 00:45:18.519308 containerd[1537]: time="2025-08-13T00:45:18.519264795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:45:18.523967 systemd-resolved[1396]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Aug 13 00:45:19.320644 systemd-resolved[1396]: Clock change detected. Flushing caches. Aug 13 00:45:19.321020 systemd-timesyncd[1414]: Contacted time server 205.233.73.201:123 (2.flatcar.pool.ntp.org). Aug 13 00:45:19.321091 systemd-timesyncd[1414]: Initial clock synchronization to Wed 2025-08-13 00:45:19.320462 UTC. Aug 13 00:45:19.397825 kubelet[2685]: E0813 00:45:19.397599 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:20.535675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585267935.mount: Deactivated successfully. 
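
The kube-proxy and tigera-operator records in this span follow the standard CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it. A minimal sketch of the same sequence against containerd's CRI socket; the socket path and the kube-proxy image tag are assumptions for illustration, while the pod metadata is copied from the log:

```go
// Sketch of the CRI call sequence visible in the log
// (RunPodSandbox -> CreateContainer -> StartContainer).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumed default containerd CRI endpoint.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Metadata copied from the RunPodSandbox record above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-kcxvj",
			Namespace: "kube-system",
			Uid:       "a930f92e-82f3-4691-aba9-8a09cf5cd347",
		},
	}
	// 1. RunPodSandbox returns the sandbox id seen in the log.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// 2. CreateContainer within that sandbox returns a container id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"}, // assumed tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// 3. Matches the "StartContainer ... returns successfully" records.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", ctr.ContainerId)
}
```
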
Aug 13 00:45:21.064297 kubelet[2685]: E0813 00:45:21.064221 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:21.108944 kubelet[2685]: I0813 00:45:21.108712 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kcxvj" podStartSLOduration=4.108683858 podStartE2EDuration="4.108683858s" podCreationTimestamp="2025-08-13 00:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:45:19.41076989 +0000 UTC m=+7.247926055" watchObservedRunningTime="2025-08-13 00:45:21.108683858 +0000 UTC m=+8.945840025" Aug 13 00:45:21.404384 kubelet[2685]: E0813 00:45:21.404198 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:21.525723 containerd[1537]: time="2025-08-13T00:45:21.525353801Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:21.526507 containerd[1537]: time="2025-08-13T00:45:21.526448717Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:45:21.527178 containerd[1537]: time="2025-08-13T00:45:21.527092606Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:21.531298 containerd[1537]: time="2025-08-13T00:45:21.531152473Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:21.532323 containerd[1537]: time="2025-08-13T00:45:21.531758416Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.593569874s" Aug 13 00:45:21.532323 containerd[1537]: time="2025-08-13T00:45:21.531792263Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:45:21.537044 containerd[1537]: time="2025-08-13T00:45:21.536931684Z" level=info msg="CreateContainer within sandbox \"15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:45:21.548298 containerd[1537]: time="2025-08-13T00:45:21.546947112Z" level=info msg="Container bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:21.558850 containerd[1537]: time="2025-08-13T00:45:21.558007918Z" level=info msg="CreateContainer within sandbox \"15e58465d518444ec7866f71862cd8dd2a2884d4330456ded370d67cf2ca1045\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c\"" Aug 13 00:45:21.560705 containerd[1537]: time="2025-08-13T00:45:21.560635865Z" 
level=info msg="StartContainer for \"bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c\"" Aug 13 00:45:21.562983 containerd[1537]: time="2025-08-13T00:45:21.562929578Z" level=info msg="connecting to shim bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c" address="unix:///run/containerd/s/138fb3c3e062643dd85547c54d14209e10002fa3cf036e394c04691dd29e67da" protocol=ttrpc version=3 Aug 13 00:45:21.598605 systemd[1]: Started cri-containerd-bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c.scope - libcontainer container bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c. Aug 13 00:45:21.655887 containerd[1537]: time="2025-08-13T00:45:21.655726402Z" level=info msg="StartContainer for \"bab80b4233f9918b163e9a8cb36015b44d185e78bded4b75f0acac223777f41c\" returns successfully" Aug 13 00:45:22.246287 kubelet[2685]: E0813 00:45:22.246229 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:22.400774 kubelet[2685]: E0813 00:45:22.400731 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:22.423304 kubelet[2685]: E0813 00:45:22.421296 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:22.423690 kubelet[2685]: E0813 00:45:22.423646 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:27.428786 update_engine[1517]: I20250813 00:45:27.428695 1517 update_attempter.cc:509] Updating boot flags... Aug 13 00:45:29.017995 sudo[1766]: pam_unix(sudo:session): session closed for user root Aug 13 00:45:29.022393 sshd[1765]: Connection closed by 139.178.68.195 port 47964 Aug 13 00:45:29.022186 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Aug 13 00:45:29.030977 systemd-logind[1516]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:45:29.032003 systemd[1]: sshd@6-146.190.133.69:22-139.178.68.195:47964.service: Deactivated successfully. Aug 13 00:45:29.036671 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:45:29.037812 systemd[1]: session-7.scope: Consumed 5.929s CPU time, 158.8M memory peak. Aug 13 00:45:29.043446 systemd-logind[1516]: Removed session 7. Aug 13 00:45:33.819282 kubelet[2685]: I0813 00:45:33.819093 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-z2l5b" podStartSLOduration=14.222729708 podStartE2EDuration="16.819053176s" podCreationTimestamp="2025-08-13 00:45:17 +0000 UTC" firstStartedPulling="2025-08-13 00:45:18.518108877 +0000 UTC m=+6.774146215" lastFinishedPulling="2025-08-13 00:45:21.533313525 +0000 UTC m=+9.370469683" observedRunningTime="2025-08-13 00:45:22.499227558 +0000 UTC m=+10.336383736" watchObservedRunningTime="2025-08-13 00:45:33.819053176 +0000 UTC m=+21.656209336" Aug 13 00:45:33.832814 systemd[1]: Created slice kubepods-besteffort-pod7cfdd5a4_8f0e_482f_b9c9_1f1a24071ae7.slice - libcontainer container kubepods-besteffort-pod7cfdd5a4_8f0e_482f_b9c9_1f1a24071ae7.slice. 
Aug 13 00:45:33.878284 kubelet[2685]: I0813 00:45:33.878225 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbxjh\" (UniqueName: \"kubernetes.io/projected/7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7-kube-api-access-dbxjh\") pod \"calico-typha-76c948bf55-jqw6h\" (UID: \"7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7\") " pod="calico-system/calico-typha-76c948bf55-jqw6h" Aug 13 00:45:33.878568 kubelet[2685]: I0813 00:45:33.878543 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7-typha-certs\") pod \"calico-typha-76c948bf55-jqw6h\" (UID: \"7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7\") " pod="calico-system/calico-typha-76c948bf55-jqw6h" Aug 13 00:45:33.878940 kubelet[2685]: I0813 00:45:33.878912 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7-tigera-ca-bundle\") pod \"calico-typha-76c948bf55-jqw6h\" (UID: \"7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7\") " pod="calico-system/calico-typha-76c948bf55-jqw6h" Aug 13 00:45:34.142160 kubelet[2685]: E0813 00:45:34.141630 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:34.144617 containerd[1537]: time="2025-08-13T00:45:34.142961087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c948bf55-jqw6h,Uid:7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:34.184471 containerd[1537]: time="2025-08-13T00:45:34.184186150Z" level=info msg="connecting to shim 2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9" address="unix:///run/containerd/s/716f332c0dea7cdf3fed0744af7ae586b72a8b0a6a9f9c0ca92e28c51123a4be" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:34.255784 systemd[1]: Started cri-containerd-2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9.scope - libcontainer container 2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9. Aug 13 00:45:34.370797 systemd[1]: Created slice kubepods-besteffort-podcd4dce00_90b8_43e1_8c5a_81618e231ca6.slice - libcontainer container kubepods-besteffort-podcd4dce00_90b8_43e1_8c5a_81618e231ca6.slice. 
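
The three VerifyControllerAttachedVolume records for calico-typha above correspond to a secret volume, a configMap volume, and a projected service-account token volume in the pod spec. An illustrative mapping using k8s.io/api/core/v1, with names taken from the log; the projected source is simplified, since the real kube-api-access volume also projects the cluster CA bundle and namespace:

```go
// Illustrative pod-spec volumes behind the reconciler records above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func typhaVolumes() []corev1.Volume {
	return []corev1.Volume{
		{ // kubernetes.io/secret/...-typha-certs
			Name: "typha-certs",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
			},
		},
		{ // kubernetes.io/configmap/...-tigera-ca-bundle
			Name: "tigera-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
				},
			},
		},
		{ // kubernetes.io/projected/...-kube-api-access-dbxjh (simplified)
			Name: "kube-api-access-dbxjh",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
}

func main() {
	for _, v := range typhaVolumes() {
		fmt.Println(v.Name)
	}
}
```
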
Aug 13 00:45:34.384503 kubelet[2685]: I0813 00:45:34.384447 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-cni-bin-dir\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384503 kubelet[2685]: I0813 00:45:34.384505 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-cni-log-dir\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384761 kubelet[2685]: I0813 00:45:34.384537 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lbr\" (UniqueName: \"kubernetes.io/projected/cd4dce00-90b8-43e1-8c5a-81618e231ca6-kube-api-access-42lbr\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384761 kubelet[2685]: I0813 00:45:34.384573 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-policysync\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384761 kubelet[2685]: I0813 00:45:34.384599 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-var-run-calico\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384761 kubelet[2685]: I0813 00:45:34.384625 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-flexvol-driver-host\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384761 kubelet[2685]: I0813 00:45:34.384650 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4dce00-90b8-43e1-8c5a-81618e231ca6-tigera-ca-bundle\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384954 kubelet[2685]: I0813 00:45:34.384693 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-lib-modules\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384954 kubelet[2685]: I0813 00:45:34.384718 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cd4dce00-90b8-43e1-8c5a-81618e231ca6-node-certs\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384954 kubelet[2685]: I0813 00:45:34.384740 2685 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-var-lib-calico\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384954 kubelet[2685]: I0813 00:45:34.384769 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-cni-net-dir\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.384954 kubelet[2685]: I0813 00:45:34.384797 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd4dce00-90b8-43e1-8c5a-81618e231ca6-xtables-lock\") pod \"calico-node-bsx7h\" (UID: \"cd4dce00-90b8-43e1-8c5a-81618e231ca6\") " pod="calico-system/calico-node-bsx7h" Aug 13 00:45:34.462272 containerd[1537]: time="2025-08-13T00:45:34.460721670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c948bf55-jqw6h,Uid:7cfdd5a4-8f0e-482f-b9c9-1f1a24071ae7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9\"" Aug 13 00:45:34.463947 kubelet[2685]: E0813 00:45:34.463906 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:34.465950 containerd[1537]: time="2025-08-13T00:45:34.465914189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:45:34.490356 kubelet[2685]: E0813 00:45:34.490112 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.490356 kubelet[2685]: W0813 00:45:34.490239 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.491978 kubelet[2685]: E0813 00:45:34.491921 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.495697 kubelet[2685]: E0813 00:45:34.495550 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.495697 kubelet[2685]: W0813 00:45:34.495576 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.496412 kubelet[2685]: E0813 00:45:34.495599 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:34.505478 kubelet[2685]: E0813 00:45:34.505408 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.505478 kubelet[2685]: W0813 00:45:34.505430 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.505478 kubelet[2685]: E0813 00:45:34.505450 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.566948 kubelet[2685]: E0813 00:45:34.566632 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0" Aug 13 00:45:34.664572 kubelet[2685]: E0813 00:45:34.664532 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.664572 kubelet[2685]: W0813 00:45:34.664559 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.665149 kubelet[2685]: E0813 00:45:34.664587 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.665149 kubelet[2685]: E0813 00:45:34.665077 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.665293 kubelet[2685]: W0813 00:45:34.665091 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.665406 kubelet[2685]: E0813 00:45:34.665300 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.665629 kubelet[2685]: E0813 00:45:34.665558 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.665629 kubelet[2685]: W0813 00:45:34.665567 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.666563 kubelet[2685]: E0813 00:45:34.665579 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:34.666997 kubelet[2685]: E0813 00:45:34.666976 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.666997 kubelet[2685]: W0813 00:45:34.666992 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.667212 kubelet[2685]: E0813 00:45:34.667016 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.667421 kubelet[2685]: E0813 00:45:34.667402 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.667421 kubelet[2685]: W0813 00:45:34.667421 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.667513 kubelet[2685]: E0813 00:45:34.667434 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.667689 kubelet[2685]: E0813 00:45:34.667673 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.667689 kubelet[2685]: W0813 00:45:34.667686 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.667882 kubelet[2685]: E0813 00:45:34.667713 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.668022 kubelet[2685]: E0813 00:45:34.668002 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.668022 kubelet[2685]: W0813 00:45:34.668021 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.668103 kubelet[2685]: E0813 00:45:34.668032 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.668493 kubelet[2685]: E0813 00:45:34.668477 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.668493 kubelet[2685]: W0813 00:45:34.668491 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.668803 kubelet[2685]: E0813 00:45:34.668502 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:34.669213 kubelet[2685]: E0813 00:45:34.669182 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.669213 kubelet[2685]: W0813 00:45:34.669208 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.669213 kubelet[2685]: E0813 00:45:34.669219 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.669536 kubelet[2685]: E0813 00:45:34.669500 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.669536 kubelet[2685]: W0813 00:45:34.669509 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.669536 kubelet[2685]: E0813 00:45:34.669520 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.669722 kubelet[2685]: E0813 00:45:34.669707 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.669722 kubelet[2685]: W0813 00:45:34.669720 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.669722 kubelet[2685]: E0813 00:45:34.669732 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.670520 kubelet[2685]: E0813 00:45:34.670500 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.670520 kubelet[2685]: W0813 00:45:34.670517 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.670609 kubelet[2685]: E0813 00:45:34.670532 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.670912 kubelet[2685]: E0813 00:45:34.670887 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.670912 kubelet[2685]: W0813 00:45:34.670904 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.671184 kubelet[2685]: E0813 00:45:34.670919 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:34.671239 kubelet[2685]: E0813 00:45:34.671230 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.671455 kubelet[2685]: W0813 00:45:34.671244 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.671455 kubelet[2685]: E0813 00:45:34.671274 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.671525 kubelet[2685]: E0813 00:45:34.671503 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.671525 kubelet[2685]: W0813 00:45:34.671514 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.671579 kubelet[2685]: E0813 00:45:34.671527 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.672111 kubelet[2685]: E0813 00:45:34.672095 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.672111 kubelet[2685]: W0813 00:45:34.672108 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.672200 kubelet[2685]: E0813 00:45:34.672123 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.672955 kubelet[2685]: E0813 00:45:34.672482 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.672955 kubelet[2685]: W0813 00:45:34.672495 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.672955 kubelet[2685]: E0813 00:45:34.672511 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.672955 kubelet[2685]: E0813 00:45:34.672885 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.672955 kubelet[2685]: W0813 00:45:34.672928 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.672955 kubelet[2685]: E0813 00:45:34.672940 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:34.673421 kubelet[2685]: E0813 00:45:34.673401 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.673421 kubelet[2685]: W0813 00:45:34.673418 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.673526 kubelet[2685]: E0813 00:45:34.673433 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.673985 kubelet[2685]: E0813 00:45:34.673968 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.673985 kubelet[2685]: W0813 00:45:34.673981 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.674076 kubelet[2685]: E0813 00:45:34.673993 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.677849 containerd[1537]: time="2025-08-13T00:45:34.677798393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bsx7h,Uid:cd4dce00-90b8-43e1-8c5a-81618e231ca6,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:34.690295 kubelet[2685]: E0813 00:45:34.688765 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.690295 kubelet[2685]: W0813 00:45:34.688793 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.690295 kubelet[2685]: E0813 00:45:34.688819 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:34.690295 kubelet[2685]: I0813 00:45:34.688855 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d41150c4-c6b0-44d4-8583-33beadfff0f0-socket-dir\") pod \"csi-node-driver-8znj8\" (UID: \"d41150c4-c6b0-44d4-8583-33beadfff0f0\") " pod="calico-system/csi-node-driver-8znj8" Aug 13 00:45:34.690295 kubelet[2685]: E0813 00:45:34.689475 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:34.690295 kubelet[2685]: W0813 00:45:34.689498 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:34.690295 kubelet[2685]: E0813 00:45:34.689521 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 00:45:34.690295 kubelet[2685]: I0813 00:45:34.689562 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d41150c4-c6b0-44d4-8583-33beadfff0f0-registration-dir\") pod \"csi-node-driver-8znj8\" (UID: \"d41150c4-c6b0-44d4-8583-33beadfff0f0\") " pod="calico-system/csi-node-driver-8znj8"
Aug 13 00:45:34.690295 kubelet[2685]: E0813 00:45:34.689805 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:45:34.690716 kubelet[2685]: W0813 00:45:34.689818 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:45:34.690716 kubelet[2685]: E0813 00:45:34.689831 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:45:34.690716 kubelet[2685]: I0813 00:45:34.689849 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d41150c4-c6b0-44d4-8583-33beadfff0f0-varrun\") pod \"csi-node-driver-8znj8\" (UID: \"d41150c4-c6b0-44d4-8583-33beadfff0f0\") " pod="calico-system/csi-node-driver-8znj8"
Aug 13 00:45:34.690878 kubelet[2685]: I0813 00:45:34.690867 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9htxm\" (UniqueName: \"kubernetes.io/projected/d41150c4-c6b0-44d4-8583-33beadfff0f0-kube-api-access-9htxm\") pod \"csi-node-driver-8znj8\" (UID: \"d41150c4-c6b0-44d4-8583-33beadfff0f0\") " pod="calico-system/csi-node-driver-8znj8"
Aug 13 00:45:34.692439 kubelet[2685]: I0813 00:45:34.691182 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d41150c4-c6b0-44d4-8583-33beadfff0f0-kubelet-dir\") pod \"csi-node-driver-8znj8\" (UID: \"d41150c4-c6b0-44d4-8583-33beadfff0f0\") " pod="calico-system/csi-node-driver-8znj8"
[... the three FlexVolume probe-failure messages above repeat 38 more times between 00:45:34.690 and 00:45:34.828, interleaved with the records below; the repeats are omitted ...]
Aug 13 00:45:34.733308 containerd[1537]: time="2025-08-13T00:45:34.733113310Z" level=info msg="connecting to shim 1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4" address="unix:///run/containerd/s/3b5cf5b1a3639af6c8b5d4f0d705cf8c54526994351ad5e5ea799fbd15496b18" namespace=k8s.io protocol=ttrpc version=3
Aug 13 00:45:34.789439 systemd[1]: Started cri-containerd-1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4.scope - libcontainer container 1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4.
Aug 13 00:45:34.872100 containerd[1537]: time="2025-08-13T00:45:34.872026259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bsx7h,Uid:cd4dce00-90b8-43e1-8c5a-81618e231ca6,Namespace:calico-system,Attempt:0,} returns sandbox id \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\""
Aug 13 00:45:36.088576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38467674.mount: Deactivated successfully.
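The repeating triplet above comes from kubelet's FlexVolume dynamic-plugin prober: it finds the vendor~driver directory nodeagent~uds (historically associated with Istio's node agent) under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to execute the uds binary with the argument init, gets "executable file not found", and then cannot unmarshal the empty stdout as JSON. By the FlexVolume call convention, a driver must answer init with a JSON status object on stdout. Below is a minimal sketch of a driver that would satisfy that probe, assuming only the standard convention; it is a hypothetical illustration, not the real nodeagent~uds binary (which is evidently missing from this node).

```go
// flexvolume_uds.go - a minimal sketch of a FlexVolume driver answering the
// `init` call kubelet is making above. Hypothetical illustration only.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object kubelet's driver-call.go expects
// on stdout after every driver invocation.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Printing nothing here is exactly what produces the
		// "unexpected end of JSON input" errors in the log above.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}
```

Installing a binary like this at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, or removing the stale nodeagent~uds directory altogether, should presumably stop the probe loop.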
Aug 13 00:45:36.361789 kubelet[2685]: E0813 00:45:36.361487 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0"
Aug 13 00:45:37.698800 containerd[1537]: time="2025-08-13T00:45:37.698698226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:45:37.700171 containerd[1537]: time="2025-08-13T00:45:37.700106159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 00:45:37.700977 containerd[1537]: time="2025-08-13T00:45:37.700907867Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:45:37.702858 containerd[1537]: time="2025-08-13T00:45:37.702773957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:45:37.704373 containerd[1537]: time="2025-08-13T00:45:37.703603369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.237647629s"
Aug 13 00:45:37.704373 containerd[1537]: time="2025-08-13T00:45:37.703656458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 00:45:37.706997 containerd[1537]: time="2025-08-13T00:45:37.706953312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 00:45:37.733295 containerd[1537]: time="2025-08-13T00:45:37.731463615Z" level=info msg="CreateContainer within sandbox \"2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 00:45:37.747123 containerd[1537]: time="2025-08-13T00:45:37.743507693Z" level=info msg="Container e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:45:37.756482 containerd[1537]: time="2025-08-13T00:45:37.755717466Z" level=info msg="CreateContainer within sandbox \"2b697a3d5b4b6c944c8e017e42b03fae07543aa2499c94928027ee2e9333ede9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7\""
Aug 13 00:45:37.756638 containerd[1537]: time="2025-08-13T00:45:37.756590653Z" level=info msg="StartContainer for \"e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7\""
Aug 13 00:45:37.758861 containerd[1537]: time="2025-08-13T00:45:37.758774735Z" level=info msg="connecting to shim e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7" address="unix:///run/containerd/s/716f332c0dea7cdf3fed0744af7ae586b72a8b0a6a9f9c0ca92e28c51123a4be" protocol=ttrpc version=3
Aug 13 00:45:37.809615 systemd[1]: Started cri-containerd-e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7.scope - libcontainer container e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7.
Aug 13 00:45:37.901142 containerd[1537]: time="2025-08-13T00:45:37.901085532Z" level=info msg="StartContainer for \"e1d701e40bb4ba79f0865760d94d08d13862e0a7f31224718a4dad63a0b445f7\" returns successfully"
Aug 13 00:45:38.350300 kubelet[2685]: E0813 00:45:38.349707 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0"
Aug 13 00:45:38.476602 kubelet[2685]: E0813 00:45:38.476461 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:45:38.504688 kubelet[2685]: E0813 00:45:38.504641 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:45:38.504688 kubelet[2685]: W0813 00:45:38.504674 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:45:38.504688 kubelet[2685]: E0813 00:45:38.504704 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the three FlexVolume probe-failure messages above repeat 32 more times between 00:45:38.504 and 00:45:38.545; the repeats are omitted ...]
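Each triplet is a single failed exec: driver-call.go shells out to the driver, the Go exec layer reports "executable file not found in $PATH", the captured output is therefore empty, and unmarshalling zero bytes of JSON yields "unexpected end of JSON input", which plugins.go:695 then wraps. A short sketch reproducing both error strings follows; the missing binary name is made up, and this is illustrative rather than kubelet's actual code.

```go
// probe_repro.go - reproduces the two error strings in the kubelet
// messages above. Hypothetical illustration, not kubelet source.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// A bare command name that cannot be resolved surfaces exec.ErrNotFound,
	// whose text is the "executable file not found in $PATH" at driver-call.go:149.
	out, err := exec.Command("uds-driver-that-does-not-exist", "init").CombinedOutput()
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)

	// Unmarshalling the resulting empty output is what driver-call.go:262
	// reports as "unexpected end of JSON input".
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err)
	}
}
```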
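Separately, the dns.go:153 warning at 00:45:38.476 (it recurs below at 00:45:39.482) means the node's resolv.conf lists more nameserver entries than kubelet will pass through to pods; kubelet keeps the first few (classically three) and logs the applied line, which here lists 67.207.67.3 twice. A sketch of that truncation, assuming the classic three-server limit; this is an illustration, not kubelet's implementation.

```go
// resolv_cap.go - sketch of the truncation behind "Nameserver limits
// exceeded" above. The three-server cap is an assumption taken from the
// classic Kubernetes DNS limits, not read from this node's config.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed classic kubelet limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The condition behind "some nameservers have been omitted".
		fmt.Printf("limit exceeded, keeping first %d of %v\n", maxNameservers, servers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```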
Aug 13 00:45:39.481190 kubelet[2685]: I0813 00:45:39.480788 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:45:39.483207 kubelet[2685]: E0813 00:45:39.482330 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:45:39.514482 kubelet[2685]: E0813 00:45:39.514450 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:45:39.516204 kubelet[2685]: W0813 00:45:39.516004 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:45:39.517047 kubelet[2685]: E0813 00:45:39.516419 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the three FlexVolume probe-failure messages above repeat 22 more times between 00:45:39.519 and 00:45:39.555; the repeats are omitted ...]
Aug 13 00:45:39.557597 kubelet[2685]: E0813 00:45:39.557380 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:45:39.557597 kubelet[2685]: W0813 00:45:39.557406 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:45:39.557597 kubelet[2685]: E0813 00:45:39.557461 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:39.559824 kubelet[2685]: E0813 00:45:39.558971 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.559824 kubelet[2685]: W0813 00:45:39.558998 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.559824 kubelet[2685]: E0813 00:45:39.559047 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.560115 kubelet[2685]: E0813 00:45:39.560090 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.560576 kubelet[2685]: W0813 00:45:39.560552 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.561238 kubelet[2685]: E0813 00:45:39.561054 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.562545 kubelet[2685]: E0813 00:45:39.562520 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.562923 kubelet[2685]: W0813 00:45:39.562671 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.563930 kubelet[2685]: E0813 00:45:39.563765 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.563930 kubelet[2685]: W0813 00:45:39.563792 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.564839 kubelet[2685]: E0813 00:45:39.564775 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.564839 kubelet[2685]: E0813 00:45:39.564836 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.565954 kubelet[2685]: E0813 00:45:39.565153 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.565954 kubelet[2685]: W0813 00:45:39.565909 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.566560 kubelet[2685]: E0813 00:45:39.566303 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:45:39.567311 kubelet[2685]: E0813 00:45:39.567138 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.567861 kubelet[2685]: W0813 00:45:39.567831 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.568083 kubelet[2685]: E0813 00:45:39.568062 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.571101 kubelet[2685]: E0813 00:45:39.571044 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.571883 kubelet[2685]: W0813 00:45:39.571292 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.571883 kubelet[2685]: E0813 00:45:39.571463 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.574116 kubelet[2685]: E0813 00:45:39.574081 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.574337 kubelet[2685]: W0813 00:45:39.574310 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.574476 kubelet[2685]: E0813 00:45:39.574455 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:45:39.585209 kubelet[2685]: E0813 00:45:39.585159 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:45:39.585209 kubelet[2685]: W0813 00:45:39.585193 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:45:39.585464 kubelet[2685]: E0813 00:45:39.585236 2685 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
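The burst above is one failure seen through three loggers: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, execs each driver binary with the argument init, and tries to unmarshal a JSON status object from its stdout. Here the nodeagent~uds/uds executable does not exist yet, the call produces empty output, and decoding "" yields "unexpected end of JSON input". A minimal sketch of what a driver must print for init to succeed, following the upstream FlexVolume calling convention (illustrative only, not the Calico nodeagent~uds driver itself):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's driver-call.go
// expects to unmarshal from the driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Empty stdout here is exactly what produces the
		// "unexpected end of JSON input" errors in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// mount/unmount and the other verbs are out of scope for this sketch.
	fmt.Println(`{"status": "Not supported"}`)
	os.Exit(1)
}

The noise stops once the pod2daemon-flexvol image pulled below drops a working driver binary into that plugin directory.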
Aug 13 00:45:39.692338 containerd[1537]: time="2025-08-13T00:45:39.691680895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:39.693548 containerd[1537]: time="2025-08-13T00:45:39.693503188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:45:39.693878 containerd[1537]: time="2025-08-13T00:45:39.693849033Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:39.697284 containerd[1537]: time="2025-08-13T00:45:39.696386384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:39.697603 containerd[1537]: time="2025-08-13T00:45:39.697560036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.990119888s" Aug 13 00:45:39.697737 containerd[1537]: time="2025-08-13T00:45:39.697718393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:45:39.704500 containerd[1537]: time="2025-08-13T00:45:39.704346452Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:45:39.720787 containerd[1537]: time="2025-08-13T00:45:39.720723460Z" level=info msg="Container eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:39.734284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066370615.mount: Deactivated successfully. Aug 13 00:45:39.761565 containerd[1537]: time="2025-08-13T00:45:39.761212202Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\"" Aug 13 00:45:39.765390 containerd[1537]: time="2025-08-13T00:45:39.763181202Z" level=info msg="StartContainer for \"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\"" Aug 13 00:45:39.766849 containerd[1537]: time="2025-08-13T00:45:39.766767896Z" level=info msg="connecting to shim eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252" address="unix:///run/containerd/s/3b5cf5b1a3639af6c8b5d4f0d705cf8c54526994351ad5e5ea799fbd15496b18" protocol=ttrpc version=3 Aug 13 00:45:39.800553 systemd[1]: Started cri-containerd-eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252.scope - libcontainer container eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252.
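The ImageCreate/Pulled/CreateContainer/StartContainer sequence above is containerd's standard CRI-driven flow for the flexvol-driver init container. To poke at the same state by hand, a minimal sketch using containerd's Go client, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull (and unpack) the same image the log shows being fetched.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
}

The "Started cri-containerd-<id>.scope" line that follows is systemd's side of the same event: each container runs in its own transient scope unit named after its container ID.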
Aug 13 00:45:39.878033 containerd[1537]: time="2025-08-13T00:45:39.877757419Z" level=info msg="StartContainer for \"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\" returns successfully" Aug 13 00:45:39.898154 systemd[1]: cri-containerd-eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252.scope: Deactivated successfully. Aug 13 00:45:39.951881 containerd[1537]: time="2025-08-13T00:45:39.951176487Z" level=info msg="received exit event container_id:\"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\" id:\"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\" pid:3399 exited_at:{seconds:1755045939 nanos:900785936}" Aug 13 00:45:39.954397 containerd[1537]: time="2025-08-13T00:45:39.954301514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\" id:\"eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252\" pid:3399 exited_at:{seconds:1755045939 nanos:900785936}" Aug 13 00:45:39.991376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eece8d04a20943d2601137d1391d64883420533a5e6758adfb1ceffb5865e252-rootfs.mount: Deactivated successfully. Aug 13 00:45:40.354115 kubelet[2685]: E0813 00:45:40.353981 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0" Aug 13 00:45:40.490640 containerd[1537]: time="2025-08-13T00:45:40.490589784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:45:40.512083 kubelet[2685]: I0813 00:45:40.511632 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c948bf55-jqw6h" podStartSLOduration=4.272123023 podStartE2EDuration="7.511611927s" podCreationTimestamp="2025-08-13 00:45:33 +0000 UTC" firstStartedPulling="2025-08-13 00:45:34.465430526 +0000 UTC m=+22.302586691" lastFinishedPulling="2025-08-13 00:45:37.704919436 +0000 UTC m=+25.542075595" observedRunningTime="2025-08-13 00:45:38.491918539 +0000 UTC m=+26.329074705" watchObservedRunningTime="2025-08-13 00:45:40.511611927 +0000 UTC m=+28.348768092" Aug 13 00:45:42.350187 kubelet[2685]: E0813 00:45:42.350068 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0" Aug 13 00:45:44.350775 kubelet[2685]: E0813 00:45:44.350707 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0" Aug 13 00:45:44.986222 containerd[1537]: time="2025-08-13T00:45:44.986164068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:44.988945 containerd[1537]: time="2025-08-13T00:45:44.988866875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:45:44.999946 
containerd[1537]: time="2025-08-13T00:45:44.999270631Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:45.003418 containerd[1537]: time="2025-08-13T00:45:45.003352019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:45.004622 containerd[1537]: time="2025-08-13T00:45:45.004584313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.513944093s" Aug 13 00:45:45.004867 containerd[1537]: time="2025-08-13T00:45:45.004842743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:45:45.010841 containerd[1537]: time="2025-08-13T00:45:45.010700171Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:45:45.020288 containerd[1537]: time="2025-08-13T00:45:45.020183781Z" level=info msg="Container d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:45.026887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038069093.mount: Deactivated successfully. Aug 13 00:45:45.050509 containerd[1537]: time="2025-08-13T00:45:45.050446831Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\"" Aug 13 00:45:45.051801 containerd[1537]: time="2025-08-13T00:45:45.051674709Z" level=info msg="StartContainer for \"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\"" Aug 13 00:45:45.053568 containerd[1537]: time="2025-08-13T00:45:45.053516960Z" level=info msg="connecting to shim d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681" address="unix:///run/containerd/s/3b5cf5b1a3639af6c8b5d4f0d705cf8c54526994351ad5e5ea799fbd15496b18" protocol=ttrpc version=3 Aug 13 00:45:45.083508 systemd[1]: Started cri-containerd-d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681.scope - libcontainer container d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681. Aug 13 00:45:45.139779 containerd[1537]: time="2025-08-13T00:45:45.139704769Z" level=info msg="StartContainer for \"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\" returns successfully" Aug 13 00:45:45.760906 systemd[1]: cri-containerd-d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681.scope: Deactivated successfully. Aug 13 00:45:45.761371 systemd[1]: cri-containerd-d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681.scope: Consumed 641ms CPU time, 162.4M memory peak, 9.3M read from disk, 171.2M written to disk. 
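The pod_startup_latency_tracker entry above is worth decoding, since its two durations are computed differently: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts the time spent pulling images (taken from the monotonic m=+ offsets, which is why the subtraction only lines up cleanly on those). Working the logged numbers:

  image pull time     = lastFinishedPulling - firstStartedPulling = m=+25.542075595 - m=+22.302586691 = 3.239488904s
  podStartE2EDuration = observedRunningTime - podCreationTimestamp = 00:45:40.511611927 - 00:45:33 = 7.511611927s
  podStartSLOduration = 7.511611927 - 3.239488904 = 4.272123023s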
Aug 13 00:45:45.766768 containerd[1537]: time="2025-08-13T00:45:45.766046216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\" id:\"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\" pid:3455 exited_at:{seconds:1755045945 nanos:765044825}" Aug 13 00:45:45.769151 containerd[1537]: time="2025-08-13T00:45:45.768881233Z" level=info msg="received exit event container_id:\"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\" id:\"d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681\" pid:3455 exited_at:{seconds:1755045945 nanos:765044825}" Aug 13 00:45:45.817320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d983307883fbe03d368b834b1689aa042e5171b0114fb986d7d2a976352f9681-rootfs.mount: Deactivated successfully. Aug 13 00:45:45.846509 kubelet[2685]: I0813 00:45:45.846157 2685 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:45:45.906741 systemd[1]: Created slice kubepods-burstable-pod74bc61f3_1e11_436c_9b6d_9e59c4b52a34.slice - libcontainer container kubepods-burstable-pod74bc61f3_1e11_436c_9b6d_9e59c4b52a34.slice. Aug 13 00:45:45.929554 systemd[1]: Created slice kubepods-besteffort-pod18edc429_093e_48c8_bc6b_23bf9a11dc79.slice - libcontainer container kubepods-besteffort-pod18edc429_093e_48c8_bc6b_23bf9a11dc79.slice. Aug 13 00:45:45.941895 systemd[1]: Created slice kubepods-burstable-podb9850657_9169_4146_bf71_7d92d2abdff2.slice - libcontainer container kubepods-burstable-podb9850657_9169_4146_bf71_7d92d2abdff2.slice. Aug 13 00:45:45.953435 systemd[1]: Created slice kubepods-besteffort-pod4e765ba1_8ab7_49d2_8d08_fb6e10db0a9f.slice - libcontainer container kubepods-besteffort-pod4e765ba1_8ab7_49d2_8d08_fb6e10db0a9f.slice. Aug 13 00:45:45.968494 systemd[1]: Created slice kubepods-besteffort-pod2e97d7c6_fa57_42b5_919c_b271d50e0035.slice - libcontainer container kubepods-besteffort-pod2e97d7c6_fa57_42b5_919c_b271d50e0035.slice. Aug 13 00:45:45.984342 systemd[1]: Created slice kubepods-besteffort-pod148640d4_e09f_4aaf_9913_6b9ccebe9ef7.slice - libcontainer container kubepods-besteffort-pod148640d4_e09f_4aaf_9913_6b9ccebe9ef7.slice. 
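The "cni plugin not initialized" sync errors above and the wall of RunPodSandbox failures below share one cause: Calico's CNI plugin reads /var/lib/calico/nodename, a file the calico/node container writes once it is up, and every sandbox create or delete fails with "stat /var/lib/calico/nodename: no such file or directory" until that happens. A minimal sketch of that readiness gate (an illustration of the check, not Calico's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// calico/node writes its node name here after it starts; the CNI
	// plugin refuses to add or delete pod networks until the file exists.
	const nodenameFile = "/var/lib/calico/nodename"

	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("calico not ready: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("calico node name: %s\n", name)
}

Each "Created slice kubepods-..." message above, meanwhile, is the kubelet's systemd cgroup driver creating a per-pod slice named kubepods-<qos>-pod<UID>.slice with the UID's dashes mapped to underscores, nested under the pod's QoS-class slice.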
Aug 13 00:45:46.005282 kubelet[2685]: I0813 00:45:46.003550 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzq4l\" (UniqueName: \"kubernetes.io/projected/2e97d7c6-fa57-42b5-919c-b271d50e0035-kube-api-access-bzq4l\") pod \"calico-apiserver-6c7db96bd9-hlxr5\" (UID: \"2e97d7c6-fa57-42b5-919c-b271d50e0035\") " pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" Aug 13 00:45:46.005282 kubelet[2685]: I0813 00:45:46.003621 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9hz\" (UniqueName: \"kubernetes.io/projected/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-kube-api-access-tr9hz\") pod \"calico-apiserver-6c7db96bd9-xrmsw\" (UID: \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\") " pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" Aug 13 00:45:46.005282 kubelet[2685]: I0813 00:45:46.003657 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9l87\" (UniqueName: \"kubernetes.io/projected/10c16ae2-09ac-4ae3-871f-5816390e1303-kube-api-access-v9l87\") pod \"whisker-844765f98c-bg6d4\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " pod="calico-system/whisker-844765f98c-bg6d4" Aug 13 00:45:46.005282 kubelet[2685]: I0813 00:45:46.003688 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74bc61f3-1e11-436c-9b6d-9e59c4b52a34-config-volume\") pod \"coredns-668d6bf9bc-dnwrn\" (UID: \"74bc61f3-1e11-436c-9b6d-9e59c4b52a34\") " pod="kube-system/coredns-668d6bf9bc-dnwrn" Aug 13 00:45:46.005282 kubelet[2685]: I0813 00:45:46.003722 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4r2\" (UniqueName: \"kubernetes.io/projected/74bc61f3-1e11-436c-9b6d-9e59c4b52a34-kube-api-access-sv4r2\") pod \"coredns-668d6bf9bc-dnwrn\" (UID: \"74bc61f3-1e11-436c-9b6d-9e59c4b52a34\") " pod="kube-system/coredns-668d6bf9bc-dnwrn" Aug 13 00:45:46.005697 kubelet[2685]: I0813 00:45:46.003783 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgxc\" (UniqueName: \"kubernetes.io/projected/18edc429-093e-48c8-bc6b-23bf9a11dc79-kube-api-access-4cgxc\") pod \"calico-kube-controllers-755586b654-x25tm\" (UID: \"18edc429-093e-48c8-bc6b-23bf9a11dc79\") " pod="calico-system/calico-kube-controllers-755586b654-x25tm" Aug 13 00:45:46.005697 kubelet[2685]: I0813 00:45:46.003829 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1510219f-b9b7-435f-8d1d-362a5346c27a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-l9hj6\" (UID: \"1510219f-b9b7-435f-8d1d-362a5346c27a\") " pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.005697 kubelet[2685]: I0813 00:45:46.003905 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtpvb\" (UniqueName: \"kubernetes.io/projected/1510219f-b9b7-435f-8d1d-362a5346c27a-kube-api-access-vtpvb\") pod \"goldmane-768f4c5c69-l9hj6\" (UID: \"1510219f-b9b7-435f-8d1d-362a5346c27a\") " pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.005697 kubelet[2685]: I0813 00:45:46.003937 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f-calico-apiserver-certs\") pod \"calico-apiserver-85c585b668-8bc59\" (UID: \"4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f\") " pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" Aug 13 00:45:46.005697 kubelet[2685]: I0813 00:45:46.004028 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-backend-key-pair\") pod \"whisker-844765f98c-bg6d4\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " pod="calico-system/whisker-844765f98c-bg6d4" Aug 13 00:45:46.005312 systemd[1]: Created slice kubepods-besteffort-pod10c16ae2_09ac_4ae3_871f_5816390e1303.slice - libcontainer container kubepods-besteffort-pod10c16ae2_09ac_4ae3_871f_5816390e1303.slice. Aug 13 00:45:46.006027 kubelet[2685]: I0813 00:45:46.004074 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-ca-bundle\") pod \"whisker-844765f98c-bg6d4\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " pod="calico-system/whisker-844765f98c-bg6d4" Aug 13 00:45:46.007746 kubelet[2685]: I0813 00:45:46.007691 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjkv\" (UniqueName: \"kubernetes.io/projected/b9850657-9169-4146-bf71-7d92d2abdff2-kube-api-access-kwjkv\") pod \"coredns-668d6bf9bc-qks26\" (UID: \"b9850657-9169-4146-bf71-7d92d2abdff2\") " pod="kube-system/coredns-668d6bf9bc-qks26" Aug 13 00:45:46.011388 kubelet[2685]: I0813 00:45:46.010660 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rjdg\" (UniqueName: \"kubernetes.io/projected/4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f-kube-api-access-8rjdg\") pod \"calico-apiserver-85c585b668-8bc59\" (UID: \"4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f\") " pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" Aug 13 00:45:46.012759 kubelet[2685]: I0813 00:45:46.012724 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9850657-9169-4146-bf71-7d92d2abdff2-config-volume\") pod \"coredns-668d6bf9bc-qks26\" (UID: \"b9850657-9169-4146-bf71-7d92d2abdff2\") " pod="kube-system/coredns-668d6bf9bc-qks26" Aug 13 00:45:46.012974 kubelet[2685]: I0813 00:45:46.012937 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1510219f-b9b7-435f-8d1d-362a5346c27a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-l9hj6\" (UID: \"1510219f-b9b7-435f-8d1d-362a5346c27a\") " pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.017680 kubelet[2685]: I0813 00:45:46.017616 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18edc429-093e-48c8-bc6b-23bf9a11dc79-tigera-ca-bundle\") pod \"calico-kube-controllers-755586b654-x25tm\" (UID: \"18edc429-093e-48c8-bc6b-23bf9a11dc79\") " pod="calico-system/calico-kube-controllers-755586b654-x25tm" Aug 13 00:45:46.018173 kubelet[2685]: I0813 00:45:46.018129 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e97d7c6-fa57-42b5-919c-b271d50e0035-calico-apiserver-certs\") pod \"calico-apiserver-6c7db96bd9-hlxr5\" (UID: \"2e97d7c6-fa57-42b5-919c-b271d50e0035\") " pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" Aug 13 00:45:46.023917 kubelet[2685]: I0813 00:45:46.022568 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-calico-apiserver-certs\") pod \"calico-apiserver-6c7db96bd9-xrmsw\" (UID: \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\") " pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" Aug 13 00:45:46.023917 kubelet[2685]: I0813 00:45:46.023146 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1510219f-b9b7-435f-8d1d-362a5346c27a-config\") pod \"goldmane-768f4c5c69-l9hj6\" (UID: \"1510219f-b9b7-435f-8d1d-362a5346c27a\") " pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.029910 systemd[1]: Created slice kubepods-besteffort-pod1510219f_b9b7_435f_8d1d_362a5346c27a.slice - libcontainer container kubepods-besteffort-pod1510219f_b9b7_435f_8d1d_362a5346c27a.slice. Aug 13 00:45:46.224302 kubelet[2685]: E0813 00:45:46.221564 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:46.231068 containerd[1537]: time="2025-08-13T00:45:46.231021954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnwrn,Uid:74bc61f3-1e11-436c-9b6d-9e59c4b52a34,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:46.243320 containerd[1537]: time="2025-08-13T00:45:46.243195517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755586b654-x25tm,Uid:18edc429-093e-48c8-bc6b-23bf9a11dc79,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:46.248066 kubelet[2685]: E0813 00:45:46.247886 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:46.249473 containerd[1537]: time="2025-08-13T00:45:46.249431315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qks26,Uid:b9850657-9169-4146-bf71-7d92d2abdff2,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:46.268128 containerd[1537]: time="2025-08-13T00:45:46.265620576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c585b668-8bc59,Uid:4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:45:46.281786 containerd[1537]: time="2025-08-13T00:45:46.281732511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-hlxr5,Uid:2e97d7c6-fa57-42b5-919c-b271d50e0035,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:45:46.300200 containerd[1537]: time="2025-08-13T00:45:46.300121208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-xrmsw,Uid:148640d4-e09f-4aaf-9913-6b9ccebe9ef7,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:45:46.325013 containerd[1537]: time="2025-08-13T00:45:46.324937696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844765f98c-bg6d4,Uid:10c16ae2-09ac-4ae3-871f-5816390e1303,Namespace:calico-system,Attempt:0,}" Aug 13 
00:45:46.341570 containerd[1537]: time="2025-08-13T00:45:46.341365192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l9hj6,Uid:1510219f-b9b7-435f-8d1d-362a5346c27a,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:46.368651 systemd[1]: Created slice kubepods-besteffort-podd41150c4_c6b0_44d4_8583_33beadfff0f0.slice - libcontainer container kubepods-besteffort-podd41150c4_c6b0_44d4_8583_33beadfff0f0.slice. Aug 13 00:45:46.377279 containerd[1537]: time="2025-08-13T00:45:46.377183790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8znj8,Uid:d41150c4-c6b0-44d4-8583-33beadfff0f0,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:46.595903 containerd[1537]: time="2025-08-13T00:45:46.595856391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:45:46.758897 containerd[1537]: time="2025-08-13T00:45:46.758616263Z" level=error msg="Failed to destroy network for sandbox \"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.808475 containerd[1537]: time="2025-08-13T00:45:46.787495469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-hlxr5,Uid:2e97d7c6-fa57-42b5-919c-b271d50e0035,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.808881 containerd[1537]: time="2025-08-13T00:45:46.790962003Z" level=error msg="Failed to destroy network for sandbox \"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.809893 containerd[1537]: time="2025-08-13T00:45:46.809849749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-844765f98c-bg6d4,Uid:10c16ae2-09ac-4ae3-871f-5816390e1303,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.810120 containerd[1537]: time="2025-08-13T00:45:46.793459575Z" level=error msg="Failed to destroy network for sandbox \"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.811986 containerd[1537]: time="2025-08-13T00:45:46.801879329Z" level=error msg="Failed to destroy network for sandbox \"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:45:46.811986 containerd[1537]: time="2025-08-13T00:45:46.803155033Z" level=error msg="Failed to destroy network for sandbox \"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.812129 kubelet[2685]: E0813 00:45:46.810658 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.812129 kubelet[2685]: E0813 00:45:46.810748 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-844765f98c-bg6d4" Aug 13 00:45:46.812129 kubelet[2685]: E0813 00:45:46.810774 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-844765f98c-bg6d4" Aug 13 00:45:46.812345 containerd[1537]: time="2025-08-13T00:45:46.812077842Z" level=error msg="Failed to destroy network for sandbox \"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.812377 kubelet[2685]: E0813 00:45:46.810816 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-844765f98c-bg6d4_calico-system(10c16ae2-09ac-4ae3-871f-5816390e1303)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-844765f98c-bg6d4_calico-system(10c16ae2-09ac-4ae3-871f-5816390e1303)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"230f0024f0ae214a706f90260a39eb580cccf0a646db8b749c0d9649a1954cbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-844765f98c-bg6d4" podUID="10c16ae2-09ac-4ae3-871f-5816390e1303" Aug 13 00:45:46.812377 kubelet[2685]: E0813 00:45:46.811160 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.812377 kubelet[2685]: E0813 00:45:46.811222 2685 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" Aug 13 00:45:46.812519 kubelet[2685]: E0813 00:45:46.811647 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" Aug 13 00:45:46.812519 kubelet[2685]: E0813 00:45:46.811737 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7db96bd9-hlxr5_calico-apiserver(2e97d7c6-fa57-42b5-919c-b271d50e0035)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7db96bd9-hlxr5_calico-apiserver(2e97d7c6-fa57-42b5-919c-b271d50e0035)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d95d0575845091c757bb6c0abd32b0be3863684d8e2621a7702de90653afab07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" podUID="2e97d7c6-fa57-42b5-919c-b271d50e0035" Aug 13 00:45:46.814752 containerd[1537]: time="2025-08-13T00:45:46.814702299Z" level=error msg="Failed to destroy network for sandbox \"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.815320 containerd[1537]: time="2025-08-13T00:45:46.815292597Z" level=error msg="Failed to destroy network for sandbox \"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.815773 containerd[1537]: time="2025-08-13T00:45:46.815741668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8znj8,Uid:d41150c4-c6b0-44d4-8583-33beadfff0f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.817549 kubelet[2685]: E0813 00:45:46.817226 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 13 00:45:46.817549 kubelet[2685]: E0813 00:45:46.817370 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8znj8" Aug 13 00:45:46.817549 kubelet[2685]: E0813 00:45:46.817402 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8znj8" Aug 13 00:45:46.818465 kubelet[2685]: E0813 00:45:46.817531 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8znj8_calico-system(d41150c4-c6b0-44d4-8583-33beadfff0f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8znj8_calico-system(d41150c4-c6b0-44d4-8583-33beadfff0f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82a976d9aac6c757e1be8b67692ebef3b7f8a3d2ee3f3404a368596a18e23e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8znj8" podUID="d41150c4-c6b0-44d4-8583-33beadfff0f0" Aug 13 00:45:46.819129 containerd[1537]: time="2025-08-13T00:45:46.819076685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnwrn,Uid:74bc61f3-1e11-436c-9b6d-9e59c4b52a34,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.819695 containerd[1537]: time="2025-08-13T00:45:46.819600375Z" level=error msg="Failed to destroy network for sandbox \"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.820010 kubelet[2685]: E0813 00:45:46.819954 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.820218 kubelet[2685]: E0813 00:45:46.820014 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnwrn" Aug 13 00:45:46.820218 kubelet[2685]: E0813 00:45:46.820035 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dnwrn" Aug 13 00:45:46.820218 kubelet[2685]: E0813 00:45:46.820082 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dnwrn_kube-system(74bc61f3-1e11-436c-9b6d-9e59c4b52a34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dnwrn_kube-system(74bc61f3-1e11-436c-9b6d-9e59c4b52a34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7d3333bab8e58c6c6e29eae1b0fc1886d53475c34e6c2be244f751b81962175\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dnwrn" podUID="74bc61f3-1e11-436c-9b6d-9e59c4b52a34" Aug 13 00:45:46.821794 containerd[1537]: time="2025-08-13T00:45:46.820638616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755586b654-x25tm,Uid:18edc429-093e-48c8-bc6b-23bf9a11dc79,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.821940 kubelet[2685]: E0813 00:45:46.821575 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.821940 kubelet[2685]: E0813 00:45:46.821662 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755586b654-x25tm" Aug 13 00:45:46.821940 kubelet[2685]: E0813 00:45:46.821686 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755586b654-x25tm" Aug 13 00:45:46.822357 kubelet[2685]: E0813 
00:45:46.821739 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-755586b654-x25tm_calico-system(18edc429-093e-48c8-bc6b-23bf9a11dc79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-755586b654-x25tm_calico-system(18edc429-093e-48c8-bc6b-23bf9a11dc79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53b689bcfd8486c9483e8c547a13240c5923e1ab0958075eed4e6b3086c55a8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-755586b654-x25tm" podUID="18edc429-093e-48c8-bc6b-23bf9a11dc79" Aug 13 00:45:46.822774 containerd[1537]: time="2025-08-13T00:45:46.822647848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-xrmsw,Uid:148640d4-e09f-4aaf-9913-6b9ccebe9ef7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.823413 kubelet[2685]: E0813 00:45:46.823357 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.823489 kubelet[2685]: E0813 00:45:46.823455 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" Aug 13 00:45:46.823524 kubelet[2685]: E0813 00:45:46.823506 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" Aug 13 00:45:46.823649 containerd[1537]: time="2025-08-13T00:45:46.823614274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qks26,Uid:b9850657-9169-4146-bf71-7d92d2abdff2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.824195 kubelet[2685]: E0813 00:45:46.823551 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"calico-apiserver-6c7db96bd9-xrmsw_calico-apiserver(148640d4-e09f-4aaf-9913-6b9ccebe9ef7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7db96bd9-xrmsw_calico-apiserver(148640d4-e09f-4aaf-9913-6b9ccebe9ef7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43de23e76116bbd31eecadd68482cff81af2dc821f7e6f120fc098fe6573021a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" podUID="148640d4-e09f-4aaf-9913-6b9ccebe9ef7" Aug 13 00:45:46.824195 kubelet[2685]: E0813 00:45:46.824001 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.824195 kubelet[2685]: E0813 00:45:46.824032 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qks26" Aug 13 00:45:46.824577 kubelet[2685]: E0813 00:45:46.824053 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qks26" Aug 13 00:45:46.824577 kubelet[2685]: E0813 00:45:46.824098 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qks26_kube-system(b9850657-9169-4146-bf71-7d92d2abdff2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qks26_kube-system(b9850657-9169-4146-bf71-7d92d2abdff2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad69f557e9a904c97a449150609e2afee6012914fd167112ecbc9be91d05cde8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qks26" podUID="b9850657-9169-4146-bf71-7d92d2abdff2" Aug 13 00:45:46.825323 containerd[1537]: time="2025-08-13T00:45:46.825118287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c585b668-8bc59,Uid:4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.826277 
kubelet[2685]: E0813 00:45:46.826076 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.826277 kubelet[2685]: E0813 00:45:46.826145 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" Aug 13 00:45:46.826277 kubelet[2685]: E0813 00:45:46.826164 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" Aug 13 00:45:46.826447 kubelet[2685]: E0813 00:45:46.826203 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85c585b668-8bc59_calico-apiserver(4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85c585b668-8bc59_calico-apiserver(4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c34f6a455dbfb77c40673d57cdfd39a57e8c2b26356906e46d08fe3c2070dee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" podUID="4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f" Aug 13 00:45:46.826503 containerd[1537]: time="2025-08-13T00:45:46.826464095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l9hj6,Uid:1510219f-b9b7-435f-8d1d-362a5346c27a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.826752 kubelet[2685]: E0813 00:45:46.826696 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:45:46.826832 kubelet[2685]: E0813 00:45:46.826752 2685 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.826832 kubelet[2685]: E0813 00:45:46.826779 2685 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-l9hj6" Aug 13 00:45:46.827379 kubelet[2685]: E0813 00:45:46.827316 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-l9hj6_calico-system(1510219f-b9b7-435f-8d1d-362a5346c27a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-l9hj6_calico-system(1510219f-b9b7-435f-8d1d-362a5346c27a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8812ee9f8f11004133e6648b2b8bb017a71afc9fd7cdd3ff489df35a8bd6394e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-l9hj6" podUID="1510219f-b9b7-435f-8d1d-362a5346c27a" Aug 13 00:45:53.244552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194366737.mount: Deactivated successfully. Aug 13 00:45:53.283746 containerd[1537]: time="2025-08-13T00:45:53.283677291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:53.285151 containerd[1537]: time="2025-08-13T00:45:53.285084968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:45:53.286152 containerd[1537]: time="2025-08-13T00:45:53.286087871Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:53.300540 containerd[1537]: time="2025-08-13T00:45:53.299753264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:53.301070 containerd[1537]: time="2025-08-13T00:45:53.301011611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.705110792s" Aug 13 00:45:53.301070 containerd[1537]: time="2025-08-13T00:45:53.301067852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:45:53.345688 containerd[1537]: time="2025-08-13T00:45:53.345632485Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for 
container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:45:53.375099 containerd[1537]: time="2025-08-13T00:45:53.370892181Z" level=info msg="Container 307c340d2555983369453890560737844aa62c095edc758be29202dec442c525: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:53.376988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704089881.mount: Deactivated successfully. Aug 13 00:45:53.389706 containerd[1537]: time="2025-08-13T00:45:53.389637246Z" level=info msg="CreateContainer within sandbox \"1190aeacafd33c3309c678a3921de4c3c780871e2cce35c5b219c4f75c5374d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\"" Aug 13 00:45:53.390714 containerd[1537]: time="2025-08-13T00:45:53.390669113Z" level=info msg="StartContainer for \"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\"" Aug 13 00:45:53.396715 containerd[1537]: time="2025-08-13T00:45:53.396031973Z" level=info msg="connecting to shim 307c340d2555983369453890560737844aa62c095edc758be29202dec442c525" address="unix:///run/containerd/s/3b5cf5b1a3639af6c8b5d4f0d705cf8c54526994351ad5e5ea799fbd15496b18" protocol=ttrpc version=3 Aug 13 00:45:53.536676 systemd[1]: Started cri-containerd-307c340d2555983369453890560737844aa62c095edc758be29202dec442c525.scope - libcontainer container 307c340d2555983369453890560737844aa62c095edc758be29202dec442c525. Aug 13 00:45:53.648029 containerd[1537]: time="2025-08-13T00:45:53.647961185Z" level=info msg="StartContainer for \"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\" returns successfully" Aug 13 00:45:53.894697 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:45:53.895587 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:45:54.209480 kubelet[2685]: I0813 00:45:54.209333 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-backend-key-pair\") pod \"10c16ae2-09ac-4ae3-871f-5816390e1303\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " Aug 13 00:45:54.211191 kubelet[2685]: I0813 00:45:54.210172 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9l87\" (UniqueName: \"kubernetes.io/projected/10c16ae2-09ac-4ae3-871f-5816390e1303-kube-api-access-v9l87\") pod \"10c16ae2-09ac-4ae3-871f-5816390e1303\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " Aug 13 00:45:54.211191 kubelet[2685]: I0813 00:45:54.210213 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-ca-bundle\") pod \"10c16ae2-09ac-4ae3-871f-5816390e1303\" (UID: \"10c16ae2-09ac-4ae3-871f-5816390e1303\") " Aug 13 00:45:54.211191 kubelet[2685]: I0813 00:45:54.210829 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "10c16ae2-09ac-4ae3-871f-5816390e1303" (UID: "10c16ae2-09ac-4ae3-871f-5816390e1303"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:45:54.216635 kubelet[2685]: I0813 00:45:54.216583 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10c16ae2-09ac-4ae3-871f-5816390e1303" (UID: "10c16ae2-09ac-4ae3-871f-5816390e1303"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:45:54.217475 kubelet[2685]: I0813 00:45:54.217356 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10c16ae2-09ac-4ae3-871f-5816390e1303-kube-api-access-v9l87" (OuterVolumeSpecName: "kube-api-access-v9l87") pod "10c16ae2-09ac-4ae3-871f-5816390e1303" (UID: "10c16ae2-09ac-4ae3-871f-5816390e1303"). InnerVolumeSpecName "kube-api-access-v9l87". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:45:54.244425 systemd[1]: var-lib-kubelet-pods-10c16ae2\x2d09ac\x2d4ae3\x2d871f\x2d5816390e1303-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv9l87.mount: Deactivated successfully. Aug 13 00:45:54.244549 systemd[1]: var-lib-kubelet-pods-10c16ae2\x2d09ac\x2d4ae3\x2d871f\x2d5816390e1303-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:45:54.311007 kubelet[2685]: I0813 00:45:54.310928 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v9l87\" (UniqueName: \"kubernetes.io/projected/10c16ae2-09ac-4ae3-871f-5816390e1303-kube-api-access-v9l87\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:45:54.311007 kubelet[2685]: I0813 00:45:54.310969 2685 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-ca-bundle\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:45:54.311007 kubelet[2685]: I0813 00:45:54.310979 2685 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10c16ae2-09ac-4ae3-871f-5816390e1303-whisker-backend-key-pair\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:45:54.361717 systemd[1]: Removed slice kubepods-besteffort-pod10c16ae2_09ac_4ae3_871f_5816390e1303.slice - libcontainer container kubepods-besteffort-pod10c16ae2_09ac_4ae3_871f_5816390e1303.slice. Aug 13 00:45:54.660156 kubelet[2685]: I0813 00:45:54.658964 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bsx7h" podStartSLOduration=2.227347651 podStartE2EDuration="20.65879424s" podCreationTimestamp="2025-08-13 00:45:34 +0000 UTC" firstStartedPulling="2025-08-13 00:45:34.874563843 +0000 UTC m=+22.711719988" lastFinishedPulling="2025-08-13 00:45:53.306010418 +0000 UTC m=+41.143166577" observedRunningTime="2025-08-13 00:45:54.657000459 +0000 UTC m=+42.494156667" watchObservedRunningTime="2025-08-13 00:45:54.65879424 +0000 UTC m=+42.495950410" Aug 13 00:45:54.770054 systemd[1]: Created slice kubepods-besteffort-pod7187a7a1_1f14_490d_907a_be8e2a4890e7.slice - libcontainer container kubepods-besteffort-pod7187a7a1_1f14_490d_907a_be8e2a4890e7.slice. 
Aug 13 00:45:54.815068 kubelet[2685]: I0813 00:45:54.814948 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bxcj\" (UniqueName: \"kubernetes.io/projected/7187a7a1-1f14-490d-907a-be8e2a4890e7-kube-api-access-8bxcj\") pod \"whisker-64cc8cb549-n6tbh\" (UID: \"7187a7a1-1f14-490d-907a-be8e2a4890e7\") " pod="calico-system/whisker-64cc8cb549-n6tbh" Aug 13 00:45:54.815068 kubelet[2685]: I0813 00:45:54.815060 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7187a7a1-1f14-490d-907a-be8e2a4890e7-whisker-ca-bundle\") pod \"whisker-64cc8cb549-n6tbh\" (UID: \"7187a7a1-1f14-490d-907a-be8e2a4890e7\") " pod="calico-system/whisker-64cc8cb549-n6tbh" Aug 13 00:45:54.815337 kubelet[2685]: I0813 00:45:54.815106 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7187a7a1-1f14-490d-907a-be8e2a4890e7-whisker-backend-key-pair\") pod \"whisker-64cc8cb549-n6tbh\" (UID: \"7187a7a1-1f14-490d-907a-be8e2a4890e7\") " pod="calico-system/whisker-64cc8cb549-n6tbh" Aug 13 00:45:54.885673 containerd[1537]: time="2025-08-13T00:45:54.885626213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\" id:\"698e1fb6ed13f4c12e68211213a11e6230eb647228bae93fb7135e2a0108bc11\" pid:3817 exit_status:1 exited_at:{seconds:1755045954 nanos:885004541}" Aug 13 00:45:55.075717 containerd[1537]: time="2025-08-13T00:45:55.075653862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cc8cb549-n6tbh,Uid:7187a7a1-1f14-490d-907a-be8e2a4890e7,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:55.451564 systemd-networkd[1444]: cali54402e7a391: Link UP Aug 13 00:45:55.451749 systemd-networkd[1444]: cali54402e7a391: Gained carrier Aug 13 00:45:55.512975 containerd[1537]: 2025-08-13 00:45:55.130 [INFO][3835] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:45:55.512975 containerd[1537]: 2025-08-13 00:45:55.159 [INFO][3835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0 whisker-64cc8cb549- calico-system 7187a7a1-1f14-490d-907a-be8e2a4890e7 955 0 2025-08-13 00:45:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64cc8cb549 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 whisker-64cc8cb549-n6tbh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali54402e7a391 [] [] }} ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-" Aug 13 00:45:55.512975 containerd[1537]: 2025-08-13 00:45:55.159 [INFO][3835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.512975 containerd[1537]: 2025-08-13 00:45:55.354 [INFO][3843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" HandleID="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Workload="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.356 [INFO][3843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" HandleID="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Workload="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e210), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"whisker-64cc8cb549-n6tbh", "timestamp":"2025-08-13 00:45:55.354348655 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.356 [INFO][3843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.357 [INFO][3843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.357 [INFO][3843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.372 [INFO][3843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.391 [INFO][3843] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.399 [INFO][3843] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.403 [INFO][3843] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.513478 containerd[1537]: 2025-08-13 00:45:55.408 [INFO][3843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.408 [INFO][3843] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.412 [INFO][3843] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346 Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.419 [INFO][3843] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.428 [INFO][3843] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.65/26] block=192.168.81.64/26 handle="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" 
host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.428 [INFO][3843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.65/26] handle="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.428 [INFO][3843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:45:55.514051 containerd[1537]: 2025-08-13 00:45:55.428 [INFO][3843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.65/26] IPv6=[] ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" HandleID="k8s-pod-network.58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Workload="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.515923 containerd[1537]: 2025-08-13 00:45:55.433 [INFO][3835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0", GenerateName:"whisker-64cc8cb549-", Namespace:"calico-system", SelfLink:"", UID:"7187a7a1-1f14-490d-907a-be8e2a4890e7", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64cc8cb549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"whisker-64cc8cb549-n6tbh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali54402e7a391", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:55.515923 containerd[1537]: 2025-08-13 00:45:55.433 [INFO][3835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.65/32] ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.516135 containerd[1537]: 2025-08-13 00:45:55.433 [INFO][3835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54402e7a391 ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.516135 containerd[1537]: 2025-08-13 00:45:55.453 [INFO][3835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.518432 containerd[1537]: 2025-08-13 00:45:55.454 [INFO][3835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0", GenerateName:"whisker-64cc8cb549-", Namespace:"calico-system", SelfLink:"", UID:"7187a7a1-1f14-490d-907a-be8e2a4890e7", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64cc8cb549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346", Pod:"whisker-64cc8cb549-n6tbh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali54402e7a391", MAC:"fe:b2:b1:4d:f4:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:55.518584 containerd[1537]: 2025-08-13 00:45:55.507 [INFO][3835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" Namespace="calico-system" Pod="whisker-64cc8cb549-n6tbh" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-whisker--64cc8cb549--n6tbh-eth0" Aug 13 00:45:55.567503 containerd[1537]: time="2025-08-13T00:45:55.567436588Z" level=info msg="connecting to shim 58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346" address="unix:///run/containerd/s/c7cfefb624ba63e65b3e07ff35ac51a0bc47e10971e437e444da2bab59d119e3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:55.635521 systemd[1]: Started cri-containerd-58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346.scope - libcontainer container 58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346. 
Aug 13 00:45:55.907040 containerd[1537]: time="2025-08-13T00:45:55.905960021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cc8cb549-n6tbh,Uid:7187a7a1-1f14-490d-907a-be8e2a4890e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346\"" Aug 13 00:45:55.919706 containerd[1537]: time="2025-08-13T00:45:55.918129763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:45:56.222881 containerd[1537]: time="2025-08-13T00:45:56.222685810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\" id:\"cae2e4119443439ed1e1e7b1cbd4a80863367b6538c2940c422fc1c1af4436cd\" pid:3960 exit_status:1 exited_at:{seconds:1755045956 nanos:222343621}" Aug 13 00:45:56.354358 kubelet[2685]: I0813 00:45:56.354300 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10c16ae2-09ac-4ae3-871f-5816390e1303" path="/var/lib/kubelet/pods/10c16ae2-09ac-4ae3-871f-5816390e1303/volumes" Aug 13 00:45:57.351850 containerd[1537]: time="2025-08-13T00:45:57.351547231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755586b654-x25tm,Uid:18edc429-093e-48c8-bc6b-23bf9a11dc79,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:57.353967 containerd[1537]: time="2025-08-13T00:45:57.353928976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l9hj6,Uid:1510219f-b9b7-435f-8d1d-362a5346c27a,Namespace:calico-system,Attempt:0,}" Aug 13 00:45:57.455127 systemd-networkd[1444]: cali54402e7a391: Gained IPv6LL Aug 13 00:45:57.646246 containerd[1537]: time="2025-08-13T00:45:57.646016019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:57.647440 containerd[1537]: time="2025-08-13T00:45:57.647155438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 00:45:57.649245 containerd[1537]: time="2025-08-13T00:45:57.648977895Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:57.656459 containerd[1537]: time="2025-08-13T00:45:57.656165412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:45:57.659188 containerd[1537]: time="2025-08-13T00:45:57.658150726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.738804336s" Aug 13 00:45:57.659188 containerd[1537]: time="2025-08-13T00:45:57.659191903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:45:57.668853 systemd-networkd[1444]: calia96bba8d889: Link UP Aug 13 00:45:57.669174 systemd-networkd[1444]: calia96bba8d889: Gained carrier Aug 13 00:45:57.672525 containerd[1537]: time="2025-08-13T00:45:57.671340795Z" level=info 
msg="CreateContainer within sandbox \"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:45:57.679852 containerd[1537]: time="2025-08-13T00:45:57.679702303Z" level=info msg="Container 19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:57.691769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143532630.mount: Deactivated successfully. Aug 13 00:45:57.713433 containerd[1537]: 2025-08-13 00:45:57.452 [INFO][4038] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:45:57.713433 containerd[1537]: 2025-08-13 00:45:57.474 [INFO][4038] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0 calico-kube-controllers-755586b654- calico-system 18edc429-093e-48c8-bc6b-23bf9a11dc79 884 0 2025-08-13 00:45:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:755586b654 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 calico-kube-controllers-755586b654-x25tm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia96bba8d889 [] [] }} ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-" Aug 13 00:45:57.713433 containerd[1537]: 2025-08-13 00:45:57.474 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.713433 containerd[1537]: 2025-08-13 00:45:57.573 [INFO][4062] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" HandleID="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.574 [INFO][4062] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" HandleID="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003398d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"calico-kube-controllers-755586b654-x25tm", "timestamp":"2025-08-13 00:45:57.573986886 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.574 [INFO][4062] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.574 [INFO][4062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.574 [INFO][4062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.588 [INFO][4062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.598 [INFO][4062] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.611 [INFO][4062] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.616 [INFO][4062] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.713792 containerd[1537]: 2025-08-13 00:45:57.622 [INFO][4062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.622 [INFO][4062] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.626 [INFO][4062] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.637 [INFO][4062] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.650 [INFO][4062] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.66/26] block=192.168.81.64/26 handle="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.650 [INFO][4062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.66/26] handle="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.650 [INFO][4062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:45:57.715583 containerd[1537]: 2025-08-13 00:45:57.650 [INFO][4062] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.66/26] IPv6=[] ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" HandleID="k8s-pod-network.b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.715869 containerd[1537]: 2025-08-13 00:45:57.658 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0", GenerateName:"calico-kube-controllers-755586b654-", Namespace:"calico-system", SelfLink:"", UID:"18edc429-093e-48c8-bc6b-23bf9a11dc79", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755586b654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"calico-kube-controllers-755586b654-x25tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia96bba8d889", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:57.716692 containerd[1537]: 2025-08-13 00:45:57.658 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.66/32] ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.716692 containerd[1537]: 2025-08-13 00:45:57.658 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia96bba8d889 ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm"
WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.716862 containerd[1537]: 2025-08-13 00:45:57.669 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0", GenerateName:"calico-kube-controllers-755586b654-", Namespace:"calico-system", SelfLink:"", UID:"18edc429-093e-48c8-bc6b-23bf9a11dc79", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755586b654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe", Pod:"calico-kube-controllers-755586b654-x25tm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia96bba8d889", MAC:"46:a8:52:58:92:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:57.716950 containerd[1537]: 2025-08-13 00:45:57.705 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" Namespace="calico-system" Pod="calico-kube-controllers-755586b654-x25tm" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--kube--controllers--755586b654--x25tm-eth0" Aug 13 00:45:57.716950 containerd[1537]: time="2025-08-13T00:45:57.713784764Z" level=info msg="CreateContainer within sandbox \"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0\"" Aug 13 00:45:57.716950 containerd[1537]: time="2025-08-13T00:45:57.715980102Z" level=info msg="StartContainer for \"19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0\"" Aug 13 00:45:57.721959 containerd[1537]: time="2025-08-13T00:45:57.721894658Z" level=info msg="connecting to shim 19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0" address="unix:///run/containerd/s/c7cfefb624ba63e65b3e07ff35ac51a0bc47e10971e437e444da2bab59d119e3" protocol=ttrpc version=3 Aug 13 00:45:57.773403 systemd[1]: Started cri-containerd-19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0.scope - libcontainer container 
19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0. Aug 13 00:45:57.776778 containerd[1537]: time="2025-08-13T00:45:57.775859778Z" level=info msg="connecting to shim b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe" address="unix:///run/containerd/s/16a66201237bc6630f1edc2b6197cc838ee189c9c1590219e0d263ba11dcb05b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:57.795225 systemd-networkd[1444]: cali5d7dcaaf2e4: Link UP Aug 13 00:45:57.797133 systemd-networkd[1444]: cali5d7dcaaf2e4: Gained carrier Aug 13 00:45:57.827738 containerd[1537]: 2025-08-13 00:45:57.507 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:45:57.827738 containerd[1537]: 2025-08-13 00:45:57.535 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0 goldmane-768f4c5c69- calico-system 1510219f-b9b7-435f-8d1d-362a5346c27a 891 0 2025-08-13 00:45:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 goldmane-768f4c5c69-l9hj6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5d7dcaaf2e4 [] [] }} ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-" Aug 13 00:45:57.827738 containerd[1537]: 2025-08-13 00:45:57.535 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.827738 containerd[1537]: 2025-08-13 00:45:57.616 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" HandleID="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Workload="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.617 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" HandleID="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Workload="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"goldmane-768f4c5c69-l9hj6", "timestamp":"2025-08-13 00:45:57.616831733 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.617 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.650 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.651 [INFO][4070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.702 [INFO][4070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.713 [INFO][4070] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.732 [INFO][4070] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.737 [INFO][4070] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828236 containerd[1537]: 2025-08-13 00:45:57.742 [INFO][4070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.743 [INFO][4070] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.747 [INFO][4070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.758 [INFO][4070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.785 [INFO][4070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.67/26] block=192.168.81.64/26 handle="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.785 [INFO][4070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.67/26] handle="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.785 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:45:57.828742 containerd[1537]: 2025-08-13 00:45:57.785 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.67/26] IPv6=[] ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" HandleID="k8s-pod-network.54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Workload="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.828998 containerd[1537]: 2025-08-13 00:45:57.790 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1510219f-b9b7-435f-8d1d-362a5346c27a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"goldmane-768f4c5c69-l9hj6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5d7dcaaf2e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:57.828998 containerd[1537]: 2025-08-13 00:45:57.790 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.67/32] ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.829169 containerd[1537]: 2025-08-13 00:45:57.791 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d7dcaaf2e4 ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.829410 containerd[1537]: 2025-08-13 00:45:57.799 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.829410 containerd[1537]: 2025-08-13 00:45:57.800 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d"
Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"1510219f-b9b7-435f-8d1d-362a5346c27a", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d", Pod:"goldmane-768f4c5c69-l9hj6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5d7dcaaf2e4", MAC:"7e:9c:fb:3c:ee:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:57.829527 containerd[1537]: 2025-08-13 00:45:57.820 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" Namespace="calico-system" Pod="goldmane-768f4c5c69-l9hj6" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0" Aug 13 00:45:57.839613 systemd[1]: Started cri-containerd-b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe.scope - libcontainer container b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe. Aug 13 00:45:57.868584 containerd[1537]: time="2025-08-13T00:45:57.868497555Z" level=info msg="connecting to shim 54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d" address="unix:///run/containerd/s/9c2e364f3d4a883edcfa046e9bff3a7d3d33e9191c8f2c09a4d72921ba0ae14b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:57.931508 systemd[1]: Started cri-containerd-54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d.scope - libcontainer container 54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d. 
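[Editor's note] The "connecting to shim" entry records containerd dialing the new sandbox's shim over a unix socket, speaking ttrpc v3. A small sketch of opening that kind of address with the Go standard library follows; this is plain net.Dial against the path copied from the log, not containerd's actual ttrpc client, and it can only succeed on the node that owns the socket.

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Address exactly as it appears in the log; the unix:// scheme
	// prefix must be stripped before dialing with the net package.
	addr := "unix:///run/containerd/s/9c2e364f3d4a883edcfa046e9bff3a7d3d33e9191c8f2c09a4d72921ba0ae14b"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // expected off-host; the socket is node-local
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket at", path)
}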
Aug 13 00:45:57.932909 containerd[1537]: time="2025-08-13T00:45:57.932523624Z" level=info msg="StartContainer for \"19b96559407e570bde51370f4a3c2dea210e10df7dd92207ae9fa50d408d13e0\" returns successfully" Aug 13 00:45:57.938591 containerd[1537]: time="2025-08-13T00:45:57.938495913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:45:57.956725 containerd[1537]: time="2025-08-13T00:45:57.956646252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755586b654-x25tm,Uid:18edc429-093e-48c8-bc6b-23bf9a11dc79,Namespace:calico-system,Attempt:0,} returns sandbox id \"b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe\"" Aug 13 00:45:58.028577 containerd[1537]: time="2025-08-13T00:45:58.028509887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-l9hj6,Uid:1510219f-b9b7-435f-8d1d-362a5346c27a,Namespace:calico-system,Attempt:0,} returns sandbox id \"54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d\"" Aug 13 00:45:58.350335 kubelet[2685]: E0813 00:45:58.349900 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:58.351105 containerd[1537]: time="2025-08-13T00:45:58.351065276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qks26,Uid:b9850657-9169-4146-bf71-7d92d2abdff2,Namespace:kube-system,Attempt:0,}" Aug 13 00:45:58.558843 systemd-networkd[1444]: cali1fbac6a0d79: Link UP Aug 13 00:45:58.560961 systemd-networkd[1444]: cali1fbac6a0d79: Gained carrier Aug 13 00:45:58.587464 containerd[1537]: 2025-08-13 00:45:58.396 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:45:58.587464 containerd[1537]: 2025-08-13 00:45:58.413 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0 coredns-668d6bf9bc- kube-system b9850657-9169-4146-bf71-7d92d2abdff2 888 0 2025-08-13 00:45:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 coredns-668d6bf9bc-qks26 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1fbac6a0d79 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-" Aug 13 00:45:58.587464 containerd[1537]: 2025-08-13 00:45:58.413 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.587464 containerd[1537]: 2025-08-13 00:45:58.464 [INFO][4228] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" HandleID="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.588888 containerd[1537]: 
2025-08-13 00:45:58.464 [INFO][4228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" HandleID="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"coredns-668d6bf9bc-qks26", "timestamp":"2025-08-13 00:45:58.464675764 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.464 [INFO][4228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.464 [INFO][4228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.465 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.474 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.487 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.495 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.499 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.588888 containerd[1537]: 2025-08-13 00:45:58.503 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.503 [INFO][4228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.507 [INFO][4228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88 Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.516 [INFO][4228] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.536 [INFO][4228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.68/26] block=192.168.81.64/26 handle="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.536 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.68/26] handle="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" host="ci-4372.1.0-8-f473d4f215" Aug 13 
00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.537 [INFO][4228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:45:58.589563 containerd[1537]: 2025-08-13 00:45:58.537 [INFO][4228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.68/26] IPv6=[] ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" HandleID="k8s-pod-network.23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.541 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b9850657-9169-4146-bf71-7d92d2abdff2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"coredns-668d6bf9bc-qks26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fbac6a0d79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.541 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.68/32] ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.541 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fbac6a0d79 ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.562 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.565 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b9850657-9169-4146-bf71-7d92d2abdff2", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88", Pod:"coredns-668d6bf9bc-qks26", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fbac6a0d79", MAC:"9a:e3:06:71:00:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:45:58.589942 containerd[1537]: 2025-08-13 00:45:58.580 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" Namespace="kube-system" Pod="coredns-668d6bf9bc-qks26" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--qks26-eth0" Aug 13 00:45:58.631717 containerd[1537]: time="2025-08-13T00:45:58.631472471Z" level=info msg="connecting to shim 23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88" address="unix:///run/containerd/s/24f24cf2b40fd5d7d2793ab9c30f7083e0f90d50945812eecda42cdc0935db34" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:45:58.711102 systemd[1]: Started cri-containerd-23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88.scope - libcontainer container 23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88. 
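[Editor's note] The coredns pod was just handed 192.168.81.68/26 out of the same 192.168.81.64/26 block that served goldmane. A quick check, using only net/netip, of how many slots such a block holds and that the addresses claimed in this log fall inside it (192.168.81.71 is claimed in later entries of this section):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.81.64/26")

	// A /26 spans 2^(32-26) = 64 addresses.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size) // 64

	// The addresses handed out in this log all sit inside the block.
	for _, s := range []string{"192.168.81.67", "192.168.81.68", "192.168.81.71"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // all true
	}
}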
Aug 13 00:45:58.820904 containerd[1537]: time="2025-08-13T00:45:58.820850190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qks26,Uid:b9850657-9169-4146-bf71-7d92d2abdff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88\"" Aug 13 00:45:58.822352 kubelet[2685]: E0813 00:45:58.822223 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:58.826168 containerd[1537]: time="2025-08-13T00:45:58.826062963Z" level=info msg="CreateContainer within sandbox \"23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:45:58.839728 containerd[1537]: time="2025-08-13T00:45:58.839639740Z" level=info msg="Container 5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:45:58.856069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761169741.mount: Deactivated successfully. Aug 13 00:45:58.861492 systemd-networkd[1444]: cali5d7dcaaf2e4: Gained IPv6LL Aug 13 00:45:58.865870 containerd[1537]: time="2025-08-13T00:45:58.865739202Z" level=info msg="CreateContainer within sandbox \"23ae482bb7482222ab50461b3690e8cbf4233aaf2d62e3a4f973c26e9834aa88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4\"" Aug 13 00:45:58.867694 containerd[1537]: time="2025-08-13T00:45:58.866559210Z" level=info msg="StartContainer for \"5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4\"" Aug 13 00:45:58.867694 containerd[1537]: time="2025-08-13T00:45:58.867623809Z" level=info msg="connecting to shim 5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4" address="unix:///run/containerd/s/24f24cf2b40fd5d7d2793ab9c30f7083e0f90d50945812eecda42cdc0935db34" protocol=ttrpc version=3 Aug 13 00:45:58.902620 systemd[1]: Started cri-containerd-5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4.scope - libcontainer container 5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4. 
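[Editor's note] The recurring kubelet warning above means the pod's effective resolv.conf carried more nameservers than the kubelet supports, so the tail of the list was dropped; the limit is three entries, which matches the applied line in the log. A sketch of that truncation; the four-entry input is hypothetical, since the log only shows the post-truncation result. Note the applied line even keeps the duplicate 67.207.67.3, so this sketch does not deduplicate either.

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic three-nameserver resolv.conf limit
// the kubelet enforces when it emits "Nameserver limits exceeded".
const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical pre-truncation input.
	in := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	out := applyNameserverLimit(in)
	fmt.Println("applied nameserver line is:", strings.Join(out, " "))
	// applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3
}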
Aug 13 00:45:58.925454 systemd-networkd[1444]: calia96bba8d889: Gained IPv6LL Aug 13 00:45:58.957080 containerd[1537]: time="2025-08-13T00:45:58.957038641Z" level=info msg="StartContainer for \"5a49a46c4d7b36fa076cf5f470b625453c93aaa2bc3b5584e8bab7cd462c05a4\" returns successfully" Aug 13 00:45:59.687953 kubelet[2685]: E0813 00:45:59.687814 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:45:59.729547 kubelet[2685]: I0813 00:45:59.729435 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qks26" podStartSLOduration=42.729381571 podStartE2EDuration="42.729381571s" podCreationTimestamp="2025-08-13 00:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:45:59.727113867 +0000 UTC m=+47.564270038" watchObservedRunningTime="2025-08-13 00:45:59.729381571 +0000 UTC m=+47.566537736" Aug 13 00:46:00.353945 containerd[1537]: time="2025-08-13T00:46:00.353459535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-hlxr5,Uid:2e97d7c6-fa57-42b5-919c-b271d50e0035,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:46:00.355924 containerd[1537]: time="2025-08-13T00:46:00.355876278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8znj8,Uid:d41150c4-c6b0-44d4-8583-33beadfff0f0,Namespace:calico-system,Attempt:0,}" Aug 13 00:46:00.590223 systemd-networkd[1444]: cali1fbac6a0d79: Gained IPv6LL Aug 13 00:46:00.698756 kubelet[2685]: E0813 00:46:00.698416 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:00.724365 systemd-networkd[1444]: calicd8940459ea: Link UP Aug 13 00:46:00.727735 systemd-networkd[1444]: calicd8940459ea: Gained carrier Aug 13 00:46:00.781381 kubelet[2685]: I0813 00:46:00.780681 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:46:00.787300 kubelet[2685]: E0813 00:46:00.786242 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.442 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.474 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0 csi-node-driver- calico-system d41150c4-c6b0-44d4-8583-33beadfff0f0 772 0 2025-08-13 00:45:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 csi-node-driver-8znj8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd8940459ea [] [] }} ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" 
WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.474 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.617 [INFO][4398] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" HandleID="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Workload="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.617 [INFO][4398] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" HandleID="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Workload="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a6360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"csi-node-driver-8znj8", "timestamp":"2025-08-13 00:46:00.617615823 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.618 [INFO][4398] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.618 [INFO][4398] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
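[Editor's note] The Workload= values throughout these entries (e.g. ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0) follow a visible pattern: literal dashes inside the node, pod, and interface names are doubled so single dashes can act as field separators. The sketch below reproduces the names seen here; the scheme is inferred purely from this log's output, not taken from Calico's source.

package main

import (
	"fmt"
	"strings"
)

// endpointName reproduces the naming pattern visible in this log:
// escape "-" as "--" in each field, then join with single dashes and
// the orchestrator tag.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + esc(iface)
}

func main() {
	fmt.Println(endpointName("ci-4372.1.0-8-f473d4f215", "csi-node-driver-8znj8", "eth0"))
	// ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0
	fmt.Println(endpointName("ci-4372.1.0-8-f473d4f215", "goldmane-768f4c5c69-l9hj6", "eth0"))
	// ci--4372.1.0--8--f473d4f215-k8s-goldmane--768f4c5c69--l9hj6-eth0
}

Both outputs match the WorkloadEndpoint names recorded earlier in this section.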
Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.618 [INFO][4398] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.633 [INFO][4398] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.643 [INFO][4398] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.651 [INFO][4398] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.656 [INFO][4398] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.661 [INFO][4398] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.662 [INFO][4398] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.666 [INFO][4398] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666 Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.677 [INFO][4398] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.695 [INFO][4398] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.69/26] block=192.168.81.64/26 handle="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.696 [INFO][4398] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.69/26] handle="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.696 [INFO][4398] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
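[Editor's note] The entries that follow set the host-side veth name to calicd8940459ea; every such interface in this log is "cali" plus an 11-character suffix, i.e. exactly the 15-character Linux interface-name limit. Below is one plausible derivation, hashing the endpoint name for a stable suffix; the log does not show what Calico actually feeds into the suffix, so the hash input here is a guess.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// vethName builds a "cali" + 11-char suffix name, matching the shape of
// the interfaces in this log (cali5d7dcaaf2e4, cali1fbac6a0d79, ...)
// and the 15-character Linux interface-name limit. The hash input is an
// assumption, not Calico's documented scheme.
func vethName(endpointID string) string {
	sum := sha256.Sum256([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	name := vethName("ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0")
	fmt.Println(name, len(name)) // a stable 15-character cali... name
}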
Aug 13 00:46:00.809877 containerd[1537]: 2025-08-13 00:46:00.696 [INFO][4398] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.69/26] IPv6=[] ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" HandleID="k8s-pod-network.1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Workload="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.711 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d41150c4-c6b0-44d4-8583-33beadfff0f0", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"csi-node-driver-8znj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd8940459ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.711 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.69/32] ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.711 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd8940459ea ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.730 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.732 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d41150c4-c6b0-44d4-8583-33beadfff0f0", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666", Pod:"csi-node-driver-8znj8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd8940459ea", MAC:"fa:62:32:71:a1:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:00.814514 containerd[1537]: 2025-08-13 00:46:00.792 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" Namespace="calico-system" Pod="csi-node-driver-8znj8" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-csi--node--driver--8znj8-eth0" Aug 13 00:46:00.952782 systemd-networkd[1444]: cali509ccbe4ba9: Link UP Aug 13 00:46:00.958150 systemd-networkd[1444]: cali509ccbe4ba9: Gained carrier Aug 13 00:46:00.962724 containerd[1537]: time="2025-08-13T00:46:00.962648627Z" level=info msg="connecting to shim 1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666" address="unix:///run/containerd/s/860bc42c3f1454448f75cd9df171c15675f2a6cb40857c452cc3ebcc78407b47" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.489 [INFO][4375] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.532 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0 calico-apiserver-6c7db96bd9- calico-apiserver 2e97d7c6-fa57-42b5-919c-b271d50e0035 887 0 2025-08-13 00:45:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7db96bd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 calico-apiserver-6c7db96bd9-hlxr5 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali509ccbe4ba9 [] [] }} ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.532 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.620 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.621 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000351c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-8-f473d4f215", "pod":"calico-apiserver-6c7db96bd9-hlxr5", "timestamp":"2025-08-13 00:46:00.620809053 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.621 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.698 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
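[Editor's note] Each populated endpoint in this section ends up with a generated MAC (7e:9c:fb:3c:ee:89 for goldmane, 9a:e3:06:71:00:de for coredns, fa:62:32:71:a1:26 for the CSI driver). All of them have the locally-administered bit set and the multicast bit clear, as expected for software-generated unicast addresses; a short check with net.ParseMAC:

package main

import (
	"fmt"
	"net"
)

func main() {
	// MACs assigned to workload endpoints earlier in this section.
	macs := []string{
		"7e:9c:fb:3c:ee:89", // goldmane-768f4c5c69-l9hj6
		"9a:e3:06:71:00:de", // coredns-668d6bf9bc-qks26
		"fa:62:32:71:a1:26", // csi-node-driver-8znj8
	}
	for _, s := range macs {
		hw, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		// Generated MACs set the locally-administered bit (0x02 in the
		// first octet) and clear the multicast bit (0x01).
		local := hw[0]&0x02 != 0
		unicast := hw[0]&0x01 == 0
		fmt.Printf("%s local=%v unicast=%v\n", hw, local, unicast)
	}
}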
Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.698 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.740 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.784 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.811 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.821 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.831 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.832 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.838 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861 Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.853 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.895 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.70/26] block=192.168.81.64/26 handle="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.895 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.70/26] handle="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.895 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
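[Editor's note] Every Calico line embedded in the containerd output above has the same shape: timestamp, level, a bracketed instance tag, source file and line, then the message. A sketch of pulling those fields apart with a regular expression, tested against one entry from this section; what the bracketed number denotes (a PID or a per-invocation counter) is not evident from the log itself.

package main

import (
	"fmt"
	"regexp"
)

// calicoLine matches the Calico CNI log format embedded in containerd's
// output: timestamp, level, instance tag, source file, line, message.
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := "2025-08-13 00:46:00.895 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock."
	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s level=%s id=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}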
Aug 13 00:46:01.060398 containerd[1537]: 2025-08-13 00:46:00.895 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.70/26] IPv6=[] ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:00.929 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0", GenerateName:"calico-apiserver-6c7db96bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e97d7c6-fa57-42b5-919c-b271d50e0035", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db96bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"calico-apiserver-6c7db96bd9-hlxr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali509ccbe4ba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:00.929 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.70/32] ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:00.929 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali509ccbe4ba9 ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:00.966 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:00.967 
[INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0", GenerateName:"calico-apiserver-6c7db96bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e97d7c6-fa57-42b5-919c-b271d50e0035", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db96bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861", Pod:"calico-apiserver-6c7db96bd9-hlxr5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali509ccbe4ba9", MAC:"6e:d3:cb:34:ef:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:01.061582 containerd[1537]: 2025-08-13 00:46:01.047 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-hlxr5" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:01.180339 containerd[1537]: time="2025-08-13T00:46:01.179982949Z" level=info msg="connecting to shim 9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" address="unix:///run/containerd/s/16a48199c1e8caacb91f507da81e4e40efd65f6f8c880a2c53a68ec53188048e" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:01.264538 systemd[1]: Started cri-containerd-1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666.scope - libcontainer container 1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666. Aug 13 00:46:01.298977 systemd[1]: Started cri-containerd-9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861.scope - libcontainer container 9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861. 
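[Editor's note] The entries that follow close out the whisker-backend image pull that started at 00:45:57.938: 33,083,307 bytes in a reported 4.089681134s, consistent with the wall-clock gap between the PullImage and Pulled entries. A quick computation of the effective pull rate from those two figures:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the Pulled entry below: reported image size in bytes
	// and the reported wall-clock pull duration.
	const bytesPulled = 33083307
	d, err := time.ParseDuration("4.089681134s")
	if err != nil {
		panic(err)
	}
	mbps := float64(bytesPulled) / d.Seconds() / 1e6
	fmt.Printf("%.1f MB/s effective pull rate\n", mbps) // about 8.1 MB/s
}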
Aug 13 00:46:01.352889 containerd[1537]: time="2025-08-13T00:46:01.352641743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-xrmsw,Uid:148640d4-e09f-4aaf-9913-6b9ccebe9ef7,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:46:01.355053 containerd[1537]: time="2025-08-13T00:46:01.354126154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c585b668-8bc59,Uid:4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:46:01.747514 containerd[1537]: time="2025-08-13T00:46:01.747240259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8znj8,Uid:d41150c4-c6b0-44d4-8583-33beadfff0f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666\"" Aug 13 00:46:01.761745 kubelet[2685]: E0813 00:46:01.761408 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:01.764748 kubelet[2685]: E0813 00:46:01.763821 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:01.896920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1901691734.mount: Deactivated successfully. Aug 13 00:46:01.939296 containerd[1537]: time="2025-08-13T00:46:01.936419448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-hlxr5,Uid:2e97d7c6-fa57-42b5-919c-b271d50e0035,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\"" Aug 13 00:46:02.018222 containerd[1537]: time="2025-08-13T00:46:02.017910034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:02.019809 containerd[1537]: time="2025-08-13T00:46:02.019767163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 00:46:02.020460 containerd[1537]: time="2025-08-13T00:46:02.020424647Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:02.026292 containerd[1537]: time="2025-08-13T00:46:02.024477244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:02.030341 containerd[1537]: time="2025-08-13T00:46:02.028228637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.089681134s" Aug 13 00:46:02.030341 containerd[1537]: time="2025-08-13T00:46:02.030147772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:46:02.036818 containerd[1537]: 
time="2025-08-13T00:46:02.036392817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:46:02.042936 containerd[1537]: time="2025-08-13T00:46:02.042883455Z" level=info msg="CreateContainer within sandbox \"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:46:02.082281 containerd[1537]: time="2025-08-13T00:46:02.082187023Z" level=info msg="Container 9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:02.085708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400609664.mount: Deactivated successfully. Aug 13 00:46:02.125133 containerd[1537]: time="2025-08-13T00:46:02.125062258Z" level=info msg="CreateContainer within sandbox \"58ad8b55603b9bf82ac983e6fb096236993c8667c551e323cb96029117fe6346\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f\"" Aug 13 00:46:02.128893 containerd[1537]: time="2025-08-13T00:46:02.128031686Z" level=info msg="StartContainer for \"9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f\"" Aug 13 00:46:02.133449 containerd[1537]: time="2025-08-13T00:46:02.132937783Z" level=info msg="connecting to shim 9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f" address="unix:///run/containerd/s/c7cfefb624ba63e65b3e07ff35ac51a0bc47e10971e437e444da2bab59d119e3" protocol=ttrpc version=3 Aug 13 00:46:02.199954 systemd[1]: Started cri-containerd-9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f.scope - libcontainer container 9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f. Aug 13 00:46:02.242886 systemd-networkd[1444]: cali25535357465: Link UP Aug 13 00:46:02.244764 systemd-networkd[1444]: cali25535357465: Gained carrier Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:01.706 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:01.752 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0 calico-apiserver-85c585b668- calico-apiserver 4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f 886 0 2025-08-13 00:45:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85c585b668 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 calico-apiserver-85c585b668-8bc59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali25535357465 [] [] }} ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:01.752 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 
00:46:02.066 [INFO][4564] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" HandleID="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.066 [INFO][4564] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" HandleID="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031dea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-8-f473d4f215", "pod":"calico-apiserver-85c585b668-8bc59", "timestamp":"2025-08-13 00:46:02.066082164 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.066 [INFO][4564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.066 [INFO][4564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.066 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.101 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.120 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.137 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.146 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.163 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.164 [INFO][4564] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.174 [INFO][4564] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44 Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.190 [INFO][4564] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.220 [INFO][4564] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.71/26] 
block=192.168.81.64/26 handle="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.220 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.71/26] handle="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.220 [INFO][4564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:46:02.304014 containerd[1537]: 2025-08-13 00:46:02.220 [INFO][4564] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.71/26] IPv6=[] ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" HandleID="k8s-pod-network.33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.230 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0", GenerateName:"calico-apiserver-85c585b668-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c585b668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"calico-apiserver-85c585b668-8bc59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25535357465", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.230 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.71/32] ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.230 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25535357465 ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" 
WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.253 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.256 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0", GenerateName:"calico-apiserver-85c585b668-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c585b668", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44", Pod:"calico-apiserver-85c585b668-8bc59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali25535357465", MAC:"5a:88:3f:09:8a:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:02.305676 containerd[1537]: 2025-08-13 00:46:02.296 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" Namespace="calico-apiserver" Pod="calico-apiserver-85c585b668-8bc59" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--85c585b668--8bc59-eth0" Aug 13 00:46:02.354814 kubelet[2685]: E0813 00:46:02.354741 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:02.360771 containerd[1537]: time="2025-08-13T00:46:02.360724445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnwrn,Uid:74bc61f3-1e11-436c-9b6d-9e59c4b52a34,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:02.380271 containerd[1537]: time="2025-08-13T00:46:02.379577263Z" level=info msg="connecting to shim 33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44" 
address="unix:///run/containerd/s/23b3fcf1d4068ae076ea4a6b27cd2d141352c9535fc24d471f3e20a1769f7ec4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:02.459770 systemd-networkd[1444]: cali594910c9368: Link UP Aug 13 00:46:02.476851 systemd-networkd[1444]: cali594910c9368: Gained carrier Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:01.671 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:01.726 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0 calico-apiserver-6c7db96bd9- calico-apiserver 148640d4-e09f-4aaf-9913-6b9ccebe9ef7 889 0 2025-08-13 00:45:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7db96bd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 calico-apiserver-6c7db96bd9-xrmsw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali594910c9368 [] [] }} ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:01.726 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.063 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.068 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030f140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-8-f473d4f215", "pod":"calico-apiserver-6c7db96bd9-xrmsw", "timestamp":"2025-08-13 00:46:02.063159145 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.068 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.223 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.223 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.275 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.293 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.320 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.329 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.337 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.337 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.345 [INFO][4558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50 Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.356 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.401 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.72/26] block=192.168.81.64/26 handle="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.403 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.72/26] handle="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.405 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
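
The IPAM traces above all follow the same shape: the plugin takes the host-wide IPAM lock, confirms this node's affinity for the 192.168.81.64/26 block, claims the first free address, and releases the lock. The lock serializes concurrent requests — [4558] logged "About to acquire" at 00:46:02.068 but only acquired at 00:46:02.223, immediately after [4564] released it at 00:46:02.220 — so the two pods cannot claim the same address, and first-free scanning then yields consecutive results (.71, then .72). A self-contained Go sketch of that first-free scan; the bitmap, pod labels, and pre-claimed .64-.70 range are illustrative, not Calico's actual implementation:

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // block mimics one affine IPAM block (192.168.81.64/26 above): 64 addresses
    // tracked by a bitmap, guarded by a lock standing in for the host-wide IPAM lock.
    type block struct {
        mu   sync.Mutex
        base net.IP
        used [64]bool
    }

    func (b *block) autoAssign() (net.IP, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for i, inUse := range b.used {
            if !inUse {
                b.used[i] = true
                ip := make(net.IP, len(b.base))
                copy(ip, b.base)
                ip[len(ip)-1] += byte(i)
                return ip, true
            }
        }
        return nil, false // block exhausted; real IPAM would look for another block
    }

    func main() {
        b := &block{base: net.ParseIP("192.168.81.64").To4()}
        for i := 0; i < 7; i++ {
            b.used[i] = true // pretend .64-.70 were claimed earlier in the boot
        }
        for _, pod := range []string{"...-8bc59", "...-xrmsw", "...-dnwrn"} {
            if ip, ok := b.autoAssign(); ok {
                fmt.Printf("%s -> %s/26\n", pod, ip) // .71, .72, .73 as in the log
            }
        }
    }
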
Aug 13 00:46:02.508907 containerd[1537]: 2025-08-13 00:46:02.405 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.72/26] IPv6=[] ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.430 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0", GenerateName:"calico-apiserver-6c7db96bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"148640d4-e09f-4aaf-9913-6b9ccebe9ef7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db96bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"calico-apiserver-6c7db96bd9-xrmsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali594910c9368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.431 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.72/32] ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.431 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali594910c9368 ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.478 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.480 
[INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0", GenerateName:"calico-apiserver-6c7db96bd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"148640d4-e09f-4aaf-9913-6b9ccebe9ef7", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7db96bd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50", Pod:"calico-apiserver-6c7db96bd9-xrmsw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali594910c9368", MAC:"52:f4:d6:e9:b8:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:02.511937 containerd[1537]: 2025-08-13 00:46:02.501 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Namespace="calico-apiserver" Pod="calico-apiserver-6c7db96bd9-xrmsw" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:02.540577 systemd[1]: Started cri-containerd-33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44.scope - libcontainer container 33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44. Aug 13 00:46:02.663882 containerd[1537]: time="2025-08-13T00:46:02.663135794Z" level=info msg="connecting to shim 654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" address="unix:///run/containerd/s/50ada664086b107fd3303adbfbef307564cf9085e31c6c89495fc68ea0a204a5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:02.701682 systemd-networkd[1444]: calicd8940459ea: Gained IPv6LL Aug 13 00:46:02.733382 containerd[1537]: time="2025-08-13T00:46:02.733289391Z" level=info msg="StartContainer for \"9acb69e04817e63e079ff189a1add26164270d0bfbd1774665ce55881cc05d6f\" returns successfully" Aug 13 00:46:02.791543 systemd[1]: Started cri-containerd-654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50.scope - libcontainer container 654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50. 
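
Each sandbox gets its own shim socket under /run/containerd/s/ (the ttrpc addresses in the "connecting to shim" entries), and systemd tracks the resulting container as a transient cri-containerd-<id>.scope unit. A minimal sketch of inspecting the same containers through the containerd Go client, assuming access to the default socket on this node; import paths vary across containerd releases:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Same socket and k8s.io namespace as the entries above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            task, err := c.Task(ctx, nil)
            if err != nil {
                continue // container has no running task yet
            }
            if st, err := task.Status(ctx); err == nil {
                fmt.Printf("%s: %s (pid %d)\n", c.ID(), st.Status, task.Pid())
            }
        }
    }
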
Aug 13 00:46:02.829601 systemd-networkd[1444]: cali509ccbe4ba9: Gained IPv6LL Aug 13 00:46:02.957520 systemd-networkd[1444]: cali9c3efd9b8a2: Link UP Aug 13 00:46:02.958630 systemd-networkd[1444]: cali9c3efd9b8a2: Gained carrier Aug 13 00:46:03.007390 kubelet[2685]: I0813 00:46:03.005828 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64cc8cb549-n6tbh" podStartSLOduration=2.886241351 podStartE2EDuration="9.005803391s" podCreationTimestamp="2025-08-13 00:45:54 +0000 UTC" firstStartedPulling="2025-08-13 00:45:55.914033119 +0000 UTC m=+43.751189277" lastFinishedPulling="2025-08-13 00:46:02.033595159 +0000 UTC m=+49.870751317" observedRunningTime="2025-08-13 00:46:02.828431315 +0000 UTC m=+50.665587480" watchObservedRunningTime="2025-08-13 00:46:03.005803391 +0000 UTC m=+50.842959558" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.644 [INFO][4628] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.720 [INFO][4628] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0 coredns-668d6bf9bc- kube-system 74bc61f3-1e11-436c-9b6d-9e59c4b52a34 878 0 2025-08-13 00:45:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-8-f473d4f215 coredns-668d6bf9bc-dnwrn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c3efd9b8a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.720 [INFO][4628] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.877 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" HandleID="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.877 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" HandleID="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ee6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-8-f473d4f215", "pod":"coredns-668d6bf9bc-dnwrn", "timestamp":"2025-08-13 00:46:02.87765178 +0000 UTC"}, Hostname:"ci-4372.1.0-8-f473d4f215", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 
00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.877 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.878 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.878 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-8-f473d4f215' Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.890 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.898 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.907 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.912 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.917 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.918 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.921 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23 Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.931 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.948 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.73/26] block=192.168.81.64/26 handle="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.948 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.73/26] handle="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" host="ci-4372.1.0-8-f473d4f215" Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.948 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
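
Note the prefix lengths: IPAM allocates out of the node-affine /26 block, but each WorkloadEndpoint records the pod address as a /32 IPNetwork, so pod traffic is routed per-address through the host veth rather than switched on a shared subnet. A quick containment check for the three addresses claimed above:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, _ := net.ParseCIDR("192.168.81.64/26") // this node's affine block
        for _, s := range []string{"192.168.81.71", "192.168.81.72", "192.168.81.73"} {
            fmt.Printf("%s in %s: %v\n", s, block, block.Contains(net.ParseIP(s)))
        }
    }
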
Aug 13 00:46:03.041184 containerd[1537]: 2025-08-13 00:46:02.948 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.73/26] IPv6=[] ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" HandleID="k8s-pod-network.628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Workload="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:02.952 [INFO][4628] cni-plugin/k8s.go 418: Populated endpoint ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74bc61f3-1e11-436c-9b6d-9e59c4b52a34", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"", Pod:"coredns-668d6bf9bc-dnwrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c3efd9b8a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:02.952 [INFO][4628] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.73/32] ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:02.953 [INFO][4628] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c3efd9b8a2 ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:02.959 [INFO][4628] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:02.960 [INFO][4628] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"74bc61f3-1e11-436c-9b6d-9e59c4b52a34", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 45, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-8-f473d4f215", ContainerID:"628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23", Pod:"coredns-668d6bf9bc-dnwrn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c3efd9b8a2", MAC:"b6:74:c3:72:42:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:46:03.045395 containerd[1537]: 2025-08-13 00:46:03.010 [INFO][4628] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-dnwrn" WorkloadEndpoint="ci--4372.1.0--8--f473d4f215-k8s-coredns--668d6bf9bc--dnwrn-eth0" Aug 13 00:46:03.176872 containerd[1537]: time="2025-08-13T00:46:03.176709090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c585b668-8bc59,Uid:4e765ba1-8ab7-49d2-8d08-fb6e10db0a9f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44\"" Aug 13 00:46:03.216483 containerd[1537]: time="2025-08-13T00:46:03.216050587Z" level=info msg="connecting to shim 628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23" address="unix:///run/containerd/s/9c678ad09730d02bf8689042d094ee42ec0cfd373d7da03e1b2be6cc5c25e460" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:03.277567 systemd[1]: Started cri-containerd-628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23.scope - libcontainer container 
628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23. Aug 13 00:46:03.341243 containerd[1537]: time="2025-08-13T00:46:03.341190003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7db96bd9-xrmsw,Uid:148640d4-e09f-4aaf-9913-6b9ccebe9ef7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\"" Aug 13 00:46:03.506630 containerd[1537]: time="2025-08-13T00:46:03.505329192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dnwrn,Uid:74bc61f3-1e11-436c-9b6d-9e59c4b52a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23\"" Aug 13 00:46:03.508603 kubelet[2685]: E0813 00:46:03.508156 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:03.515617 containerd[1537]: time="2025-08-13T00:46:03.515551807Z" level=info msg="CreateContainer within sandbox \"628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:46:03.543175 containerd[1537]: time="2025-08-13T00:46:03.543120690Z" level=info msg="Container c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:03.550836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710879756.mount: Deactivated successfully. Aug 13 00:46:03.562995 containerd[1537]: time="2025-08-13T00:46:03.561473352Z" level=info msg="CreateContainer within sandbox \"628a0af2fe16489c1291db11a0542dcd051b50e05f50bad6b1abe2d255324d23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d\"" Aug 13 00:46:03.564770 containerd[1537]: time="2025-08-13T00:46:03.564634118Z" level=info msg="StartContainer for \"c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d\"" Aug 13 00:46:03.567409 containerd[1537]: time="2025-08-13T00:46:03.567366838Z" level=info msg="connecting to shim c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d" address="unix:///run/containerd/s/9c678ad09730d02bf8689042d094ee42ec0cfd373d7da03e1b2be6cc5c25e460" protocol=ttrpc version=3 Aug 13 00:46:03.629977 systemd[1]: Started cri-containerd-c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d.scope - libcontainer container c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d. 
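
The recurring kubelet "Nameserver limits exceeded" errors come from resolv.conf validation: the glibc resolver honours at most three nameserver entries, so kubelet applies the first three and reports the rest as omitted (the applied line here even carries 67.207.67.3 twice). A sketch of that truncation; the fourth entry is hypothetical, since the log only shows the applied result:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Hypothetical node resolv.conf: the log only shows the applied result,
        // so the fourth entry below is an assumption for illustration.
        conf := "nameserver 67.207.67.3\nnameserver 67.207.67.2\nnameserver 67.207.67.3\nnameserver 1.1.1.1"
        var ns []string
        for _, line := range strings.Split(conf, "\n") {
            if f := strings.Fields(line); len(f) == 2 && f[0] == "nameserver" {
                ns = append(ns, f[1])
            }
        }
        const maxNS = 3 // glibc resolver limit that kubelet validates against
        if len(ns) > maxNS {
            fmt.Println("applied nameserver line:", strings.Join(ns[:maxNS], " "))
        }
    }
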
Aug 13 00:46:03.664153 systemd-networkd[1444]: cali594910c9368: Gained IPv6LL Aug 13 00:46:03.752770 containerd[1537]: time="2025-08-13T00:46:03.752414817Z" level=info msg="StartContainer for \"c558b9f2379c303f7a80c12630d7b1ffab24906ddd70d8bf6e743177160a046d\" returns successfully" Aug 13 00:46:03.822057 kubelet[2685]: E0813 00:46:03.822014 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:04.109726 systemd-networkd[1444]: cali25535357465: Gained IPv6LL Aug 13 00:46:04.173452 systemd-networkd[1444]: cali9c3efd9b8a2: Gained IPv6LL Aug 13 00:46:04.413154 systemd-networkd[1444]: vxlan.calico: Link UP Aug 13 00:46:04.413169 systemd-networkd[1444]: vxlan.calico: Gained carrier Aug 13 00:46:04.834109 kubelet[2685]: E0813 00:46:04.832827 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:04.860294 kubelet[2685]: I0813 00:46:04.858661 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dnwrn" podStartSLOduration=47.85864197 podStartE2EDuration="47.85864197s" podCreationTimestamp="2025-08-13 00:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:46:03.859218133 +0000 UTC m=+51.696374341" watchObservedRunningTime="2025-08-13 00:46:04.85864197 +0000 UTC m=+52.695798137" Aug 13 00:46:05.837231 kubelet[2685]: E0813 00:46:05.835236 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:06.221621 systemd-networkd[1444]: vxlan.calico: Gained IPv6LL Aug 13 00:46:06.329498 containerd[1537]: time="2025-08-13T00:46:06.328664968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:06.330236 containerd[1537]: time="2025-08-13T00:46:06.330189908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 00:46:06.332785 containerd[1537]: time="2025-08-13T00:46:06.332715981Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:06.341198 containerd[1537]: time="2025-08-13T00:46:06.341002674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:06.343317 containerd[1537]: time="2025-08-13T00:46:06.342796109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.306341321s" Aug 13 00:46:06.343317 containerd[1537]: time="2025-08-13T00:46:06.342850986Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:46:06.346873 containerd[1537]: time="2025-08-13T00:46:06.346814632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:46:06.420964 containerd[1537]: time="2025-08-13T00:46:06.420902736Z" level=info msg="CreateContainer within sandbox \"b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:46:06.434297 containerd[1537]: time="2025-08-13T00:46:06.433114959Z" level=info msg="Container 6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:06.451556 containerd[1537]: time="2025-08-13T00:46:06.451226287Z" level=info msg="CreateContainer within sandbox \"b19de84434772b3314c9d4b92fa57af4f35cbccd7f2cdff64f2465d0994f19fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf\"" Aug 13 00:46:06.453812 containerd[1537]: time="2025-08-13T00:46:06.453753024Z" level=info msg="StartContainer for \"6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf\"" Aug 13 00:46:06.457344 containerd[1537]: time="2025-08-13T00:46:06.457276818Z" level=info msg="connecting to shim 6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf" address="unix:///run/containerd/s/16a66201237bc6630f1edc2b6197cc838ee189c9c1590219e0d263ba11dcb05b" protocol=ttrpc version=3 Aug 13 00:46:06.499808 systemd[1]: Started cri-containerd-6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf.scope - libcontainer container 6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf. Aug 13 00:46:06.610084 containerd[1537]: time="2025-08-13T00:46:06.610030837Z" level=info msg="StartContainer for \"6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf\" returns successfully" Aug 13 00:46:07.003336 containerd[1537]: time="2025-08-13T00:46:07.002073007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf\" id:\"a3d532cbdb36ecde3803f13e07a095af7cc97838e5e242768bfdff7b1ea0a7bc\" pid:5005 exited_at:{seconds:1755045966 nanos:990125811}" Aug 13 00:46:07.046168 kubelet[2685]: I0813 00:46:07.046062 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-755586b654-x25tm" podStartSLOduration=24.662086563 podStartE2EDuration="33.046035946s" podCreationTimestamp="2025-08-13 00:45:34 +0000 UTC" firstStartedPulling="2025-08-13 00:45:57.960999304 +0000 UTC m=+45.798155461" lastFinishedPulling="2025-08-13 00:46:06.344948686 +0000 UTC m=+54.182104844" observedRunningTime="2025-08-13 00:46:06.912300832 +0000 UTC m=+54.749457006" watchObservedRunningTime="2025-08-13 00:46:07.046035946 +0000 UTC m=+54.883192458" Aug 13 00:46:08.801629 systemd[1]: Started sshd@7-146.190.133.69:22-139.178.68.195:38700.service - OpenSSH per-connection server daemon (139.178.68.195:38700). Aug 13 00:46:08.896144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464953646.mount: Deactivated successfully. 
Aug 13 00:46:09.048300 sshd[5022]: Accepted publickey for core from 139.178.68.195 port 38700 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:09.052096 sshd-session[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:09.069211 systemd-logind[1516]: New session 8 of user core. Aug 13 00:46:09.075687 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:46:10.109062 sshd[5029]: Connection closed by 139.178.68.195 port 38700 Aug 13 00:46:10.109399 sshd-session[5022]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:10.121730 systemd[1]: sshd@7-146.190.133.69:22-139.178.68.195:38700.service: Deactivated successfully. Aug 13 00:46:10.131459 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:46:10.142579 systemd-logind[1516]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:46:10.147886 systemd-logind[1516]: Removed session 8. Aug 13 00:46:10.522077 containerd[1537]: time="2025-08-13T00:46:10.521925051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:10.524507 containerd[1537]: time="2025-08-13T00:46:10.524419456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 00:46:10.526931 containerd[1537]: time="2025-08-13T00:46:10.525193553Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:10.535141 containerd[1537]: time="2025-08-13T00:46:10.533494541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:10.535141 containerd[1537]: time="2025-08-13T00:46:10.534808193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.187942457s" Aug 13 00:46:10.535141 containerd[1537]: time="2025-08-13T00:46:10.534856940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 00:46:10.538409 containerd[1537]: time="2025-08-13T00:46:10.537391880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:46:10.543710 containerd[1537]: time="2025-08-13T00:46:10.543595757Z" level=info msg="CreateContainer within sandbox \"54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:46:10.560284 containerd[1537]: time="2025-08-13T00:46:10.558757681Z" level=info msg="Container 2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:10.566123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2796504730.mount: Deactivated successfully. 
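
The PullImage/Pulled pairs also show the fetches running back-to-back rather than in parallel (each PullImage is logged right after the previous Pulled), consistent with kubelet's default serialized image pulls. For goldmane, 66,352,308 bytes in 4.187942457s works out to roughly 15 MiB/s from the registry:

    package main

    import "fmt"

    func main() {
        // "bytes read" and pull duration from the goldmane entries above.
        bytes := 66352308.0
        secs := 4.187942457
        fmt.Printf("~%.1f MiB/s\n", bytes/secs/(1<<20)) // ~15.1 MiB/s
    }
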
Aug 13 00:46:10.575162 containerd[1537]: time="2025-08-13T00:46:10.575040174Z" level=info msg="CreateContainer within sandbox \"54679bdd17f3625002410ac69a3f92e8eda98fe2407727bdbfb1f02fa9850c9d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\"" Aug 13 00:46:10.576675 containerd[1537]: time="2025-08-13T00:46:10.576396940Z" level=info msg="StartContainer for \"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\"" Aug 13 00:46:10.577997 containerd[1537]: time="2025-08-13T00:46:10.577921340Z" level=info msg="connecting to shim 2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12" address="unix:///run/containerd/s/9c2e364f3d4a883edcfa046e9bff3a7d3d33e9191c8f2c09a4d72921ba0ae14b" protocol=ttrpc version=3 Aug 13 00:46:10.694526 systemd[1]: Started cri-containerd-2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12.scope - libcontainer container 2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12. Aug 13 00:46:10.797219 containerd[1537]: time="2025-08-13T00:46:10.797062355Z" level=info msg="StartContainer for \"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" returns successfully" Aug 13 00:46:11.006160 kubelet[2685]: I0813 00:46:11.003878 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-l9hj6" podStartSLOduration=25.45352571 podStartE2EDuration="37.957135042s" podCreationTimestamp="2025-08-13 00:45:33 +0000 UTC" firstStartedPulling="2025-08-13 00:45:58.033467481 +0000 UTC m=+45.870623625" lastFinishedPulling="2025-08-13 00:46:10.537076796 +0000 UTC m=+58.374232957" observedRunningTime="2025-08-13 00:46:10.951731726 +0000 UTC m=+58.788887897" watchObservedRunningTime="2025-08-13 00:46:10.957135042 +0000 UTC m=+58.794291209" Aug 13 00:46:11.164414 containerd[1537]: time="2025-08-13T00:46:11.164372978Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" id:\"26c50b313b186b2a1ed2d69aa23cda5d3f20e99e1db74338eda58550e72690b0\" pid:5104 exit_status:1 exited_at:{seconds:1755045971 nanos:163659796}" Aug 13 00:46:12.057294 containerd[1537]: time="2025-08-13T00:46:12.057229162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" id:\"a72e1547e68e5c69554dce17f43543a4eb78dd2bf979ec410ec71b2a671140a0\" pid:5127 exit_status:1 exited_at:{seconds:1755045972 nanos:50338346}" Aug 13 00:46:12.317836 containerd[1537]: time="2025-08-13T00:46:12.317385343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:12.322087 containerd[1537]: time="2025-08-13T00:46:12.321909826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 00:46:12.325588 containerd[1537]: time="2025-08-13T00:46:12.325444525Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:12.338999 containerd[1537]: time="2025-08-13T00:46:12.338892498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:12.344657 
containerd[1537]: time="2025-08-13T00:46:12.344417162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.806974638s" Aug 13 00:46:12.344657 containerd[1537]: time="2025-08-13T00:46:12.344543847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:46:12.479584 containerd[1537]: time="2025-08-13T00:46:12.479461522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:46:12.497998 containerd[1537]: time="2025-08-13T00:46:12.497021565Z" level=info msg="CreateContainer within sandbox \"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:46:12.535889 containerd[1537]: time="2025-08-13T00:46:12.535817193Z" level=info msg="Container b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:12.601153 containerd[1537]: time="2025-08-13T00:46:12.600962551Z" level=info msg="CreateContainer within sandbox \"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9\"" Aug 13 00:46:12.602634 containerd[1537]: time="2025-08-13T00:46:12.602585597Z" level=info msg="StartContainer for \"b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9\"" Aug 13 00:46:12.608523 containerd[1537]: time="2025-08-13T00:46:12.608466179Z" level=info msg="connecting to shim b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9" address="unix:///run/containerd/s/860bc42c3f1454448f75cd9df171c15675f2a6cb40857c452cc3ebcc78407b47" protocol=ttrpc version=3 Aug 13 00:46:12.648587 systemd[1]: Started cri-containerd-b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9.scope - libcontainer container b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9. Aug 13 00:46:12.719576 containerd[1537]: time="2025-08-13T00:46:12.719515918Z" level=info msg="StartContainer for \"b997f42330a99fb91132d696a4f84843cdee23ac4674fb3d29a37e744ecf0be9\" returns successfully" Aug 13 00:46:13.047707 containerd[1537]: time="2025-08-13T00:46:13.047576210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" id:\"6acb789259a5354217280f13c27ed0078a34772ab6d5092fea38a83dbdf06f54\" pid:5185 exit_status:1 exited_at:{seconds:1755045973 nanos:47070637}" Aug 13 00:46:15.132933 systemd[1]: Started sshd@8-146.190.133.69:22-139.178.68.195:34660.service - OpenSSH per-connection server daemon (139.178.68.195:34660). Aug 13 00:46:15.303371 sshd[5197]: Accepted publickey for core from 139.178.68.195 port 34660 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:15.308056 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:15.325421 systemd-logind[1516]: New session 9 of user core. Aug 13 00:46:15.329606 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 00:46:16.078612 containerd[1537]: time="2025-08-13T00:46:16.078544998Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" id:\"19f87e5aaa9a3460e88d4f6a76947a7f4268c7e4525635ece8484717cb4df7e5\" pid:5231 exited_at:{seconds:1755045976 nanos:72334964}" Aug 13 00:46:16.157630 sshd[5203]: Connection closed by 139.178.68.195 port 34660 Aug 13 00:46:16.160296 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:16.175301 systemd[1]: sshd@8-146.190.133.69:22-139.178.68.195:34660.service: Deactivated successfully. Aug 13 00:46:16.182025 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:46:16.186239 systemd-logind[1516]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:46:16.190091 systemd-logind[1516]: Removed session 9. Aug 13 00:46:17.138650 containerd[1537]: time="2025-08-13T00:46:17.138574318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:17.139955 containerd[1537]: time="2025-08-13T00:46:17.139856015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 00:46:17.141268 containerd[1537]: time="2025-08-13T00:46:17.141131355Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:17.164338 containerd[1537]: time="2025-08-13T00:46:17.164224598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:17.166240 containerd[1537]: time="2025-08-13T00:46:17.166049736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.686531944s" Aug 13 00:46:17.166240 containerd[1537]: time="2025-08-13T00:46:17.166102986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:46:17.168979 containerd[1537]: time="2025-08-13T00:46:17.168605913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:46:17.178412 containerd[1537]: time="2025-08-13T00:46:17.176844125Z" level=info msg="CreateContainer within sandbox \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:46:17.218880 containerd[1537]: time="2025-08-13T00:46:17.217593203Z" level=info msg="Container 705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:17.225636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862499889.mount: Deactivated successfully. 
Aug 13 00:46:17.239617 containerd[1537]: time="2025-08-13T00:46:17.239529643Z" level=info msg="CreateContainer within sandbox \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\"" Aug 13 00:46:17.241884 containerd[1537]: time="2025-08-13T00:46:17.241777375Z" level=info msg="StartContainer for \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\"" Aug 13 00:46:17.248362 containerd[1537]: time="2025-08-13T00:46:17.248305583Z" level=info msg="connecting to shim 705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828" address="unix:///run/containerd/s/16a48199c1e8caacb91f507da81e4e40efd65f6f8c880a2c53a68ec53188048e" protocol=ttrpc version=3 Aug 13 00:46:17.297531 systemd[1]: Started cri-containerd-705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828.scope - libcontainer container 705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828. Aug 13 00:46:17.425977 containerd[1537]: time="2025-08-13T00:46:17.425751917Z" level=info msg="StartContainer for \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" returns successfully" Aug 13 00:46:17.634579 containerd[1537]: time="2025-08-13T00:46:17.634494051Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:17.635868 containerd[1537]: time="2025-08-13T00:46:17.635765919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:46:17.639314 containerd[1537]: time="2025-08-13T00:46:17.639240244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 470.584119ms" Aug 13 00:46:17.639678 containerd[1537]: time="2025-08-13T00:46:17.639528034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:46:17.642844 containerd[1537]: time="2025-08-13T00:46:17.642792044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:46:17.647832 containerd[1537]: time="2025-08-13T00:46:17.647481283Z" level=info msg="CreateContainer within sandbox \"33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:46:17.667743 containerd[1537]: time="2025-08-13T00:46:17.665188513Z" level=info msg="Container d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:17.678404 containerd[1537]: time="2025-08-13T00:46:17.677112052Z" level=info msg="CreateContainer within sandbox \"33dc0c0397a92a9dcc2ab1b2f42c76d31c6047511eb11636f3cb6cd0b37cbf44\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3\"" Aug 13 00:46:17.681550 containerd[1537]: time="2025-08-13T00:46:17.681472544Z" level=info msg="StartContainer for \"d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3\"" Aug 13 
00:46:17.684042 containerd[1537]: time="2025-08-13T00:46:17.683943356Z" level=info msg="connecting to shim d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3" address="unix:///run/containerd/s/23b3fcf1d4068ae076ea4a6b27cd2d141352c9535fc24d471f3e20a1769f7ec4" protocol=ttrpc version=3 Aug 13 00:46:17.730797 systemd[1]: Started cri-containerd-d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3.scope - libcontainer container d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3. Aug 13 00:46:17.852506 containerd[1537]: time="2025-08-13T00:46:17.852456260Z" level=info msg="StartContainer for \"d3e366c13b6fd53731cc85d18a96119130ebcba1fbb975fd4bf729a23e4b59f3\" returns successfully" Aug 13 00:46:18.010000 kubelet[2685]: I0813 00:46:18.009790 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7db96bd9-hlxr5" podStartSLOduration=32.786192825 podStartE2EDuration="48.009768312s" podCreationTimestamp="2025-08-13 00:45:30 +0000 UTC" firstStartedPulling="2025-08-13 00:46:01.944145222 +0000 UTC m=+49.781301381" lastFinishedPulling="2025-08-13 00:46:17.167720708 +0000 UTC m=+65.004876868" observedRunningTime="2025-08-13 00:46:18.005960549 +0000 UTC m=+65.843116709" watchObservedRunningTime="2025-08-13 00:46:18.009768312 +0000 UTC m=+65.846924478" Aug 13 00:46:18.049204 kubelet[2685]: I0813 00:46:18.048326 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85c585b668-8bc59" podStartSLOduration=32.594311704 podStartE2EDuration="47.046565874s" podCreationTimestamp="2025-08-13 00:45:31 +0000 UTC" firstStartedPulling="2025-08-13 00:46:03.189963407 +0000 UTC m=+51.027119571" lastFinishedPulling="2025-08-13 00:46:17.642217581 +0000 UTC m=+65.479373741" observedRunningTime="2025-08-13 00:46:18.04371727 +0000 UTC m=+65.880873435" watchObservedRunningTime="2025-08-13 00:46:18.046565874 +0000 UTC m=+65.883722039" Aug 13 00:46:18.058165 containerd[1537]: time="2025-08-13T00:46:18.056226089Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:18.060305 containerd[1537]: time="2025-08-13T00:46:18.059488058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:46:18.066173 containerd[1537]: time="2025-08-13T00:46:18.065653713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 422.804183ms" Aug 13 00:46:18.066443 containerd[1537]: time="2025-08-13T00:46:18.066418646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:46:18.071547 containerd[1537]: time="2025-08-13T00:46:18.071502051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:46:18.076341 containerd[1537]: time="2025-08-13T00:46:18.076236482Z" level=info msg="CreateContainer within sandbox \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:46:18.094506 
containerd[1537]: time="2025-08-13T00:46:18.094447585Z" level=info msg="Container cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:18.117769 containerd[1537]: time="2025-08-13T00:46:18.117700495Z" level=info msg="CreateContainer within sandbox \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\"" Aug 13 00:46:18.121654 containerd[1537]: time="2025-08-13T00:46:18.121586446Z" level=info msg="StartContainer for \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\"" Aug 13 00:46:18.130731 containerd[1537]: time="2025-08-13T00:46:18.130676030Z" level=info msg="connecting to shim cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b" address="unix:///run/containerd/s/50ada664086b107fd3303adbfbef307564cf9085e31c6c89495fc68ea0a204a5" protocol=ttrpc version=3 Aug 13 00:46:18.175876 systemd[1]: Started cri-containerd-cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b.scope - libcontainer container cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b. Aug 13 00:46:18.334799 containerd[1537]: time="2025-08-13T00:46:18.334743722Z" level=info msg="StartContainer for \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" returns successfully" Aug 13 00:46:18.952352 kubelet[2685]: I0813 00:46:18.952280 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:46:19.950087 kubelet[2685]: I0813 00:46:19.949920 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:46:20.320684 kubelet[2685]: I0813 00:46:20.320155 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7db96bd9-xrmsw" podStartSLOduration=35.600858279 podStartE2EDuration="50.320122583s" podCreationTimestamp="2025-08-13 00:45:30 +0000 UTC" firstStartedPulling="2025-08-13 00:46:03.349647604 +0000 UTC m=+51.186803761" lastFinishedPulling="2025-08-13 00:46:18.068911904 +0000 UTC m=+65.906068065" observedRunningTime="2025-08-13 00:46:18.985627924 +0000 UTC m=+66.822784091" watchObservedRunningTime="2025-08-13 00:46:20.320122583 +0000 UTC m=+68.157278746" Aug 13 00:46:20.857470 containerd[1537]: time="2025-08-13T00:46:20.857403257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:20.858567 containerd[1537]: time="2025-08-13T00:46:20.858173969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 00:46:20.861285 containerd[1537]: time="2025-08-13T00:46:20.859512746Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:20.864334 containerd[1537]: time="2025-08-13T00:46:20.864214198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.792166649s" Aug 13 
00:46:20.864334 containerd[1537]: time="2025-08-13T00:46:20.864285257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:46:20.870135 containerd[1537]: time="2025-08-13T00:46:20.867684022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:20.882741 containerd[1537]: time="2025-08-13T00:46:20.882684055Z" level=info msg="CreateContainer within sandbox \"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:46:20.972291 containerd[1537]: time="2025-08-13T00:46:20.968684571Z" level=info msg="Container bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:20.988295 containerd[1537]: time="2025-08-13T00:46:20.987565381Z" level=info msg="CreateContainer within sandbox \"1a8b9147733c327a2b6a574aac8e4fdde62c2d1e61ef9142fbc1816fe60bb666\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e\"" Aug 13 00:46:20.990000 containerd[1537]: time="2025-08-13T00:46:20.989956197Z" level=info msg="StartContainer for \"bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e\"" Aug 13 00:46:20.997063 containerd[1537]: time="2025-08-13T00:46:20.996958114Z" level=info msg="connecting to shim bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e" address="unix:///run/containerd/s/860bc42c3f1454448f75cd9df171c15675f2a6cb40857c452cc3ebcc78407b47" protocol=ttrpc version=3 Aug 13 00:46:21.064636 systemd[1]: Started cri-containerd-bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e.scope - libcontainer container bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e. Aug 13 00:46:21.185055 systemd[1]: Started sshd@9-146.190.133.69:22-139.178.68.195:51344.service - OpenSSH per-connection server daemon (139.178.68.195:51344). Aug 13 00:46:21.304074 containerd[1537]: time="2025-08-13T00:46:21.303980970Z" level=info msg="StartContainer for \"bed4146414969fe8e5cd7de01caae69e9c6cad29c6935e94f9353a25c841607e\" returns successfully" Aug 13 00:46:21.388744 sshd[5392]: Accepted publickey for core from 139.178.68.195 port 51344 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:21.395048 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:21.409602 systemd-logind[1516]: New session 10 of user core. Aug 13 00:46:21.415487 systemd[1]: Started session-10.scope - Session 10 of User core. 
Aug 13 00:46:22.146065 kubelet[2685]: I0813 00:46:22.126742 2685 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:46:22.154133 kubelet[2685]: I0813 00:46:22.152556 2685 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:46:22.535385 kubelet[2685]: I0813 00:46:22.528973 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8znj8" podStartSLOduration=29.429307896 podStartE2EDuration="48.528944492s" podCreationTimestamp="2025-08-13 00:45:34 +0000 UTC" firstStartedPulling="2025-08-13 00:46:01.768566617 +0000 UTC m=+49.605722775" lastFinishedPulling="2025-08-13 00:46:20.868203211 +0000 UTC m=+68.705359371" observedRunningTime="2025-08-13 00:46:22.388304095 +0000 UTC m=+70.225460260" watchObservedRunningTime="2025-08-13 00:46:22.528944492 +0000 UTC m=+70.366100658" Aug 13 00:46:22.824054 sshd[5408]: Connection closed by 139.178.68.195 port 51344 Aug 13 00:46:22.824957 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:22.848727 systemd[1]: sshd@9-146.190.133.69:22-139.178.68.195:51344.service: Deactivated successfully. Aug 13 00:46:22.857118 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:46:22.860574 systemd-logind[1516]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:46:22.868977 systemd[1]: Started sshd@10-146.190.133.69:22-139.178.68.195:51346.service - OpenSSH per-connection server daemon (139.178.68.195:51346). Aug 13 00:46:22.884702 systemd-logind[1516]: Removed session 10. Aug 13 00:46:22.991544 sshd[5422]: Accepted publickey for core from 139.178.68.195 port 51346 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:22.995039 sshd-session[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:23.002712 systemd-logind[1516]: New session 11 of user core. Aug 13 00:46:23.015135 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:46:23.360768 kubelet[2685]: E0813 00:46:23.360709 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:23.377288 kubelet[2685]: E0813 00:46:23.376074 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:23.402476 sshd[5424]: Connection closed by 139.178.68.195 port 51346 Aug 13 00:46:23.407233 sshd-session[5422]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:23.423091 systemd[1]: sshd@10-146.190.133.69:22-139.178.68.195:51346.service: Deactivated successfully. Aug 13 00:46:23.431324 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:46:23.435399 systemd-logind[1516]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:46:23.446611 systemd[1]: Started sshd@11-146.190.133.69:22-139.178.68.195:51348.service - OpenSSH per-connection server daemon (139.178.68.195:51348). Aug 13 00:46:23.449138 systemd-logind[1516]: Removed session 11. 
Aug 13 00:46:23.550289 sshd[5433]: Accepted publickey for core from 139.178.68.195 port 51348 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:23.554128 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:23.562828 systemd-logind[1516]: New session 12 of user core. Aug 13 00:46:23.572020 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:46:23.886620 sshd[5435]: Connection closed by 139.178.68.195 port 51348 Aug 13 00:46:23.888231 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:23.892862 systemd[1]: sshd@11-146.190.133.69:22-139.178.68.195:51348.service: Deactivated successfully. Aug 13 00:46:23.896169 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:46:23.899177 systemd-logind[1516]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:46:23.902040 systemd-logind[1516]: Removed session 12. Aug 13 00:46:25.373532 kubelet[2685]: E0813 00:46:25.373461 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:26.029490 containerd[1537]: time="2025-08-13T00:46:26.029424335Z" level=info msg="TaskExit event in podsandbox handler container_id:\"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\" id:\"dbfee86a351f18e59d87d743dad23d7cbf1812dc73b84ebee2003594d814a33f\" pid:5471 exit_status:1 exited_at:{seconds:1755045986 nanos:14959355}" Aug 13 00:46:28.595290 kubelet[2685]: I0813 00:46:28.594646 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:46:28.721604 kubelet[2685]: I0813 00:46:28.721400 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:46:28.834649 containerd[1537]: time="2025-08-13T00:46:28.834600890Z" level=info msg="StopContainer for \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" with timeout 30 (s)" Aug 13 00:46:28.843827 containerd[1537]: time="2025-08-13T00:46:28.842954277Z" level=info msg="Stop container \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" with signal terminated" Aug 13 00:46:28.903607 systemd[1]: cri-containerd-cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b.scope: Deactivated successfully. Aug 13 00:46:28.905906 systemd[1]: cri-containerd-cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b.scope: Consumed 1.562s CPU time, 46.6M memory peak, 604K read from disk. Aug 13 00:46:28.912972 systemd[1]: Started sshd@12-146.190.133.69:22-139.178.68.195:51362.service - OpenSSH per-connection server daemon (139.178.68.195:51362). 
Aug 13 00:46:28.932848 containerd[1537]: time="2025-08-13T00:46:28.932528316Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" id:\"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" pid:5337 exit_status:1 exited_at:{seconds:1755045988 nanos:927072996}" Aug 13 00:46:28.933344 containerd[1537]: time="2025-08-13T00:46:28.933190862Z" level=info msg="received exit event container_id:\"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" id:\"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" pid:5337 exit_status:1 exited_at:{seconds:1755045988 nanos:927072996}" Aug 13 00:46:29.043593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b-rootfs.mount: Deactivated successfully. Aug 13 00:46:29.092815 containerd[1537]: time="2025-08-13T00:46:29.092760123Z" level=info msg="StopContainer for \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" returns successfully" Aug 13 00:46:29.107864 containerd[1537]: time="2025-08-13T00:46:29.107695752Z" level=info msg="StopPodSandbox for \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\"" Aug 13 00:46:29.122682 containerd[1537]: time="2025-08-13T00:46:29.122601680Z" level=info msg="Container to stop \"cb7d4cf1dc2995a384d274026088d30f156ed7e6f29c74f2decbc69b567bc78b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:46:29.135749 systemd[1]: cri-containerd-654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50.scope: Deactivated successfully. Aug 13 00:46:29.146872 containerd[1537]: time="2025-08-13T00:46:29.146661276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" id:\"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" pid:4731 exit_status:137 exited_at:{seconds:1755045989 nanos:145973548}" Aug 13 00:46:29.149765 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 51362 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:29.158875 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:29.177314 systemd-logind[1516]: New session 13 of user core. Aug 13 00:46:29.181538 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:46:29.234956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50-rootfs.mount: Deactivated successfully. 
Aug 13 00:46:29.302565 containerd[1537]: time="2025-08-13T00:46:29.302495093Z" level=info msg="shim disconnected" id=654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50 namespace=k8s.io Aug 13 00:46:29.302565 containerd[1537]: time="2025-08-13T00:46:29.302543539Z" level=warning msg="cleaning up after shim disconnected" id=654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50 namespace=k8s.io Aug 13 00:46:29.315977 containerd[1537]: time="2025-08-13T00:46:29.302575213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:46:29.496831 containerd[1537]: time="2025-08-13T00:46:29.496097234Z" level=info msg="received exit event sandbox_id:\"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" exit_status:137 exited_at:{seconds:1755045989 nanos:145973548}" Aug 13 00:46:29.503753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50-shm.mount: Deactivated successfully. Aug 13 00:46:29.862742 systemd-networkd[1444]: cali594910c9368: Link DOWN Aug 13 00:46:29.862753 systemd-networkd[1444]: cali594910c9368: Lost carrier Aug 13 00:46:30.015041 sshd[5531]: Connection closed by 139.178.68.195 port 51362 Aug 13 00:46:30.021313 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:30.036362 systemd-logind[1516]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:46:30.041455 systemd[1]: sshd@12-146.190.133.69:22-139.178.68.195:51362.service: Deactivated successfully. Aug 13 00:46:30.050769 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:46:30.057580 systemd-logind[1516]: Removed session 13. Aug 13 00:46:30.229914 kubelet[2685]: I0813 00:46:30.229580 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.833 [INFO][5566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.835 [INFO][5566] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" iface="eth0" netns="/var/run/netns/cni-188f5e0f-6a06-6a2b-32e6-b84560263557" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.836 [INFO][5566] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" iface="eth0" netns="/var/run/netns/cni-188f5e0f-6a06-6a2b-32e6-b84560263557" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.864 [INFO][5566] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" after=24.880263ms iface="eth0" netns="/var/run/netns/cni-188f5e0f-6a06-6a2b-32e6-b84560263557" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.868 [INFO][5566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:29.876 [INFO][5566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.166 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.170 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.171 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.292 [INFO][5576] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.293 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" HandleID="k8s-pod-network.654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--xrmsw-eth0" Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.295 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:46:30.305045 containerd[1537]: 2025-08-13 00:46:30.300 [INFO][5566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50" Aug 13 00:46:30.313539 containerd[1537]: time="2025-08-13T00:46:30.305486365Z" level=info msg="TearDown network for sandbox \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" successfully" Aug 13 00:46:30.313539 containerd[1537]: time="2025-08-13T00:46:30.305541331Z" level=info msg="StopPodSandbox for \"654e2e2f3e60565afb9166188639cd8689b51a789e51bd3c0e2514f54dfd2f50\" returns successfully" Aug 13 00:46:30.315837 systemd[1]: run-netns-cni\x2d188f5e0f\x2d6a06\x2d6a2b\x2d32e6\x2db84560263557.mount: Deactivated successfully. 
Aug 13 00:46:30.480418 kubelet[2685]: I0813 00:46:30.480163 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-calico-apiserver-certs\") pod \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\" (UID: \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\") " Aug 13 00:46:30.480418 kubelet[2685]: I0813 00:46:30.480277 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9hz\" (UniqueName: \"kubernetes.io/projected/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-kube-api-access-tr9hz\") pod \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\" (UID: \"148640d4-e09f-4aaf-9913-6b9ccebe9ef7\") " Aug 13 00:46:30.532404 systemd[1]: var-lib-kubelet-pods-148640d4\x2de09f\x2d4aaf\x2d9913\x2d6b9ccebe9ef7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtr9hz.mount: Deactivated successfully. Aug 13 00:46:30.533053 systemd[1]: var-lib-kubelet-pods-148640d4\x2de09f\x2d4aaf\x2d9913\x2d6b9ccebe9ef7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 00:46:30.539022 kubelet[2685]: I0813 00:46:30.533977 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-kube-api-access-tr9hz" (OuterVolumeSpecName: "kube-api-access-tr9hz") pod "148640d4-e09f-4aaf-9913-6b9ccebe9ef7" (UID: "148640d4-e09f-4aaf-9913-6b9ccebe9ef7"). InnerVolumeSpecName "kube-api-access-tr9hz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:46:30.539309 kubelet[2685]: I0813 00:46:30.533974 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "148640d4-e09f-4aaf-9913-6b9ccebe9ef7" (UID: "148640d4-e09f-4aaf-9913-6b9ccebe9ef7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:46:30.587174 kubelet[2685]: I0813 00:46:30.587097 2685 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-calico-apiserver-certs\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:46:30.587174 kubelet[2685]: I0813 00:46:30.587196 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tr9hz\" (UniqueName: \"kubernetes.io/projected/148640d4-e09f-4aaf-9913-6b9ccebe9ef7-kube-api-access-tr9hz\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:46:31.244411 systemd[1]: Removed slice kubepods-besteffort-pod148640d4_e09f_4aaf_9913_6b9ccebe9ef7.slice - libcontainer container kubepods-besteffort-pod148640d4_e09f_4aaf_9913_6b9ccebe9ef7.slice. Aug 13 00:46:31.245119 systemd[1]: kubepods-besteffort-pod148640d4_e09f_4aaf_9913_6b9ccebe9ef7.slice: Consumed 1.618s CPU time, 46.9M memory peak, 604K read from disk. 
Aug 13 00:46:32.383015 kubelet[2685]: I0813 00:46:32.382957 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148640d4-e09f-4aaf-9913-6b9ccebe9ef7" path="/var/lib/kubelet/pods/148640d4-e09f-4aaf-9913-6b9ccebe9ef7/volumes" Aug 13 00:46:34.792700 containerd[1537]: time="2025-08-13T00:46:34.792581417Z" level=info msg="StopContainer for \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" with timeout 30 (s)" Aug 13 00:46:34.795030 containerd[1537]: time="2025-08-13T00:46:34.794985612Z" level=info msg="Stop container \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" with signal terminated" Aug 13 00:46:34.876821 systemd[1]: cri-containerd-705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828.scope: Deactivated successfully. Aug 13 00:46:34.877323 systemd[1]: cri-containerd-705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828.scope: Consumed 1.214s CPU time, 57.6M memory peak, 4.5M read from disk. Aug 13 00:46:34.882754 containerd[1537]: time="2025-08-13T00:46:34.882524493Z" level=info msg="received exit event container_id:\"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" id:\"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" pid:5265 exit_status:1 exited_at:{seconds:1755045994 nanos:881513803}" Aug 13 00:46:34.884559 containerd[1537]: time="2025-08-13T00:46:34.883247116Z" level=info msg="TaskExit event in podsandbox handler container_id:\"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" id:\"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" pid:5265 exit_status:1 exited_at:{seconds:1755045994 nanos:881513803}" Aug 13 00:46:34.938627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828-rootfs.mount: Deactivated successfully. Aug 13 00:46:34.952554 containerd[1537]: time="2025-08-13T00:46:34.952462358Z" level=info msg="StopContainer for \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" returns successfully" Aug 13 00:46:34.953861 containerd[1537]: time="2025-08-13T00:46:34.953778453Z" level=info msg="StopPodSandbox for \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\"" Aug 13 00:46:34.954655 containerd[1537]: time="2025-08-13T00:46:34.954191736Z" level=info msg="Container to stop \"705553c6bf0bcd02bf73c2193e311f0ae0e7924207ce3f1541acbd34b8104828\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:46:34.968715 systemd[1]: cri-containerd-9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861.scope: Deactivated successfully. Aug 13 00:46:34.976907 containerd[1537]: time="2025-08-13T00:46:34.976500285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" id:\"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" pid:4517 exit_status:137 exited_at:{seconds:1755045994 nanos:975482397}" Aug 13 00:46:35.035977 systemd[1]: Started sshd@13-146.190.133.69:22-139.178.68.195:56886.service - OpenSSH per-connection server daemon (139.178.68.195:56886). 
Aug 13 00:46:35.048467 containerd[1537]: time="2025-08-13T00:46:35.046025378Z" level=info msg="shim disconnected" id=9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861 namespace=k8s.io Aug 13 00:46:35.049694 containerd[1537]: time="2025-08-13T00:46:35.049566368Z" level=warning msg="cleaning up after shim disconnected" id=9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861 namespace=k8s.io Aug 13 00:46:35.049694 containerd[1537]: time="2025-08-13T00:46:35.049623236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:46:35.049817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861-rootfs.mount: Deactivated successfully. Aug 13 00:46:35.123302 containerd[1537]: time="2025-08-13T00:46:35.119028395Z" level=info msg="received exit event sandbox_id:\"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" exit_status:137 exited_at:{seconds:1755045994 nanos:975482397}" Aug 13 00:46:35.126650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861-shm.mount: Deactivated successfully. Aug 13 00:46:35.234988 sshd[5648]: Accepted publickey for core from 139.178.68.195 port 56886 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:35.239040 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:35.254190 systemd-logind[1516]: New session 14 of user core. Aug 13 00:46:35.260878 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:46:35.269128 kubelet[2685]: I0813 00:46:35.269087 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Aug 13 00:46:35.323498 systemd-networkd[1444]: cali509ccbe4ba9: Link DOWN Aug 13 00:46:35.323507 systemd-networkd[1444]: cali509ccbe4ba9: Lost carrier Aug 13 00:46:35.565978 sshd[5681]: Connection closed by 139.178.68.195 port 56886 Aug 13 00:46:35.567553 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:35.578441 systemd[1]: sshd@13-146.190.133.69:22-139.178.68.195:56886.service: Deactivated successfully. Aug 13 00:46:35.589954 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.320 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.320 [INFO][5674] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" iface="eth0" netns="/var/run/netns/cni-e972d671-0104-432f-d8e3-918376bdc579" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.321 [INFO][5674] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" iface="eth0" netns="/var/run/netns/cni-e972d671-0104-432f-d8e3-918376bdc579" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.336 [INFO][5674] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" after=15.978815ms iface="eth0" netns="/var/run/netns/cni-e972d671-0104-432f-d8e3-918376bdc579" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.336 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.336 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.405 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.408 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.408 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.551 [INFO][5688] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.551 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" HandleID="k8s-pod-network.9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Workload="ci--4372.1.0--8--f473d4f215-k8s-calico--apiserver--6c7db96bd9--hlxr5-eth0" Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.567 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:46:35.593489 containerd[1537]: 2025-08-13 00:46:35.574 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861" Aug 13 00:46:35.596432 systemd-logind[1516]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:46:35.598150 containerd[1537]: time="2025-08-13T00:46:35.597661059Z" level=info msg="TearDown network for sandbox \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" successfully" Aug 13 00:46:35.598150 containerd[1537]: time="2025-08-13T00:46:35.597740023Z" level=info msg="StopPodSandbox for \"9a402ca065d4745e3eb8b42474e4aff60af892ee1d333629dd29099c97f5e861\" returns successfully" Aug 13 00:46:35.605199 systemd[1]: run-netns-cni\x2de972d671\x2d0104\x2d432f\x2dd8e3\x2d918376bdc579.mount: Deactivated successfully. Aug 13 00:46:35.612641 systemd-logind[1516]: Removed session 14. 
Aug 13 00:46:35.729713 kubelet[2685]: I0813 00:46:35.729666 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e97d7c6-fa57-42b5-919c-b271d50e0035-calico-apiserver-certs\") pod \"2e97d7c6-fa57-42b5-919c-b271d50e0035\" (UID: \"2e97d7c6-fa57-42b5-919c-b271d50e0035\") " Aug 13 00:46:35.730073 kubelet[2685]: I0813 00:46:35.729997 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzq4l\" (UniqueName: \"kubernetes.io/projected/2e97d7c6-fa57-42b5-919c-b271d50e0035-kube-api-access-bzq4l\") pod \"2e97d7c6-fa57-42b5-919c-b271d50e0035\" (UID: \"2e97d7c6-fa57-42b5-919c-b271d50e0035\") " Aug 13 00:46:35.743245 kubelet[2685]: I0813 00:46:35.743163 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e97d7c6-fa57-42b5-919c-b271d50e0035-kube-api-access-bzq4l" (OuterVolumeSpecName: "kube-api-access-bzq4l") pod "2e97d7c6-fa57-42b5-919c-b271d50e0035" (UID: "2e97d7c6-fa57-42b5-919c-b271d50e0035"). InnerVolumeSpecName "kube-api-access-bzq4l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:46:35.745218 kubelet[2685]: I0813 00:46:35.745153 2685 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e97d7c6-fa57-42b5-919c-b271d50e0035-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "2e97d7c6-fa57-42b5-919c-b271d50e0035" (UID: "2e97d7c6-fa57-42b5-919c-b271d50e0035"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:46:35.745621 systemd[1]: var-lib-kubelet-pods-2e97d7c6\x2dfa57\x2d42b5\x2d919c\x2db271d50e0035-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbzq4l.mount: Deactivated successfully. Aug 13 00:46:35.834923 kubelet[2685]: I0813 00:46:35.834726 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bzq4l\" (UniqueName: \"kubernetes.io/projected/2e97d7c6-fa57-42b5-919c-b271d50e0035-kube-api-access-bzq4l\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:46:35.834923 kubelet[2685]: I0813 00:46:35.834782 2685 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2e97d7c6-fa57-42b5-919c-b271d50e0035-calico-apiserver-certs\") on node \"ci-4372.1.0-8-f473d4f215\" DevicePath \"\"" Aug 13 00:46:35.938854 systemd[1]: var-lib-kubelet-pods-2e97d7c6\x2dfa57\x2d42b5\x2d919c\x2db271d50e0035-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 00:46:36.282857 systemd[1]: Removed slice kubepods-besteffort-pod2e97d7c6_fa57_42b5_919c_b271d50e0035.slice - libcontainer container kubepods-besteffort-pod2e97d7c6_fa57_42b5_919c_b271d50e0035.slice. Aug 13 00:46:36.283416 systemd[1]: kubepods-besteffort-pod2e97d7c6_fa57_42b5_919c_b271d50e0035.slice: Consumed 1.276s CPU time, 57.8M memory peak, 4.5M read from disk. 
Aug 13 00:46:36.356425 kubelet[2685]: I0813 00:46:36.356342 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e97d7c6-fa57-42b5-919c-b271d50e0035" path="/var/lib/kubelet/pods/2e97d7c6-fa57-42b5-919c-b271d50e0035/volumes" Aug 13 00:46:37.072773 containerd[1537]: time="2025-08-13T00:46:37.072718084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c4dd04decdf91a2931be5e610ccb4f854f7ce1166f36dc0ba2df509b6e254cf\" id:\"1e6bf103e33baf87cec6fdc8776a5f6d578be0615058e133ffb21fa837351267\" pid:5722 exited_at:{seconds:1755045997 nanos:71555959}" Aug 13 00:46:38.352869 kubelet[2685]: E0813 00:46:38.352819 2685 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:40.585893 systemd[1]: Started sshd@14-146.190.133.69:22-139.178.68.195:37964.service - OpenSSH per-connection server daemon (139.178.68.195:37964). Aug 13 00:46:40.732954 sshd[5733]: Accepted publickey for core from 139.178.68.195 port 37964 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:40.735935 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:40.744775 systemd-logind[1516]: New session 15 of user core. Aug 13 00:46:40.752576 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:46:41.462465 sshd[5737]: Connection closed by 139.178.68.195 port 37964 Aug 13 00:46:41.463320 sshd-session[5733]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:41.477534 systemd[1]: sshd@14-146.190.133.69:22-139.178.68.195:37964.service: Deactivated successfully. Aug 13 00:46:41.480502 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:46:41.481862 systemd-logind[1516]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:46:41.486369 systemd[1]: Started sshd@15-146.190.133.69:22-139.178.68.195:37968.service - OpenSSH per-connection server daemon (139.178.68.195:37968). Aug 13 00:46:41.487805 systemd-logind[1516]: Removed session 15. Aug 13 00:46:41.586320 sshd[5750]: Accepted publickey for core from 139.178.68.195 port 37968 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:41.588512 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:41.594932 systemd-logind[1516]: New session 16 of user core. Aug 13 00:46:41.607543 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:46:42.020510 sshd[5752]: Connection closed by 139.178.68.195 port 37968 Aug 13 00:46:42.019894 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:42.035698 systemd[1]: sshd@15-146.190.133.69:22-139.178.68.195:37968.service: Deactivated successfully. Aug 13 00:46:42.038236 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:46:42.039545 systemd-logind[1516]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:46:42.045091 systemd[1]: Started sshd@16-146.190.133.69:22-139.178.68.195:37974.service - OpenSSH per-connection server daemon (139.178.68.195:37974). Aug 13 00:46:42.049574 systemd-logind[1516]: Removed session 16. 
Aug 13 00:46:42.171231 sshd[5762]: Accepted publickey for core from 139.178.68.195 port 37974 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:42.175742 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:42.187550 systemd-logind[1516]: New session 17 of user core. Aug 13 00:46:42.193548 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:46:43.520155 sshd[5764]: Connection closed by 139.178.68.195 port 37974 Aug 13 00:46:43.538391 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:43.556957 systemd[1]: Started sshd@17-146.190.133.69:22-139.178.68.195:37986.service - OpenSSH per-connection server daemon (139.178.68.195:37986). Aug 13 00:46:43.577830 systemd[1]: sshd@16-146.190.133.69:22-139.178.68.195:37974.service: Deactivated successfully. Aug 13 00:46:43.585016 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:46:43.597808 systemd-logind[1516]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:46:43.604681 systemd-logind[1516]: Removed session 17. Aug 13 00:46:43.737322 sshd[5798]: Accepted publickey for core from 139.178.68.195 port 37986 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:43.743755 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:43.756535 systemd-logind[1516]: New session 18 of user core. Aug 13 00:46:43.761609 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:46:43.777148 containerd[1537]: time="2025-08-13T00:46:43.776145287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d3f700fd40dca72e08b1c19a65b459de4b03641613122a160db59e57b91ad12\" id:\"891719ec0a27c0c478130144ede3c0a36f6354f69b2fbc361f56851216a29fa5\" pid:5785 exited_at:{seconds:1755046003 nanos:775585129}" Aug 13 00:46:44.720337 sshd[5805]: Connection closed by 139.178.68.195 port 37986 Aug 13 00:46:44.721473 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:44.736892 systemd[1]: sshd@17-146.190.133.69:22-139.178.68.195:37986.service: Deactivated successfully. Aug 13 00:46:44.744270 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:46:44.746534 systemd-logind[1516]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:46:44.755244 systemd[1]: Started sshd@18-146.190.133.69:22-139.178.68.195:37992.service - OpenSSH per-connection server daemon (139.178.68.195:37992). Aug 13 00:46:44.758932 systemd-logind[1516]: Removed session 18. Aug 13 00:46:44.848466 sshd[5817]: Accepted publickey for core from 139.178.68.195 port 37992 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:44.851078 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:44.858537 systemd-logind[1516]: New session 19 of user core. Aug 13 00:46:44.872594 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:46:45.028927 sshd[5819]: Connection closed by 139.178.68.195 port 37992 Aug 13 00:46:45.029982 sshd-session[5817]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:45.037003 systemd[1]: sshd@18-146.190.133.69:22-139.178.68.195:37992.service: Deactivated successfully. Aug 13 00:46:45.040152 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:46:45.042037 systemd-logind[1516]: Session 19 logged out. Waiting for processes to exit. 
Aug 13 00:46:45.044419 systemd-logind[1516]: Removed session 19. Aug 13 00:46:50.055092 systemd[1]: Started sshd@19-146.190.133.69:22-139.178.68.195:36528.service - OpenSSH per-connection server daemon (139.178.68.195:36528). Aug 13 00:46:50.153303 sshd[5843]: Accepted publickey for core from 139.178.68.195 port 36528 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:50.157964 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:50.165780 systemd-logind[1516]: New session 20 of user core. Aug 13 00:46:50.180655 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:46:50.490534 sshd[5845]: Connection closed by 139.178.68.195 port 36528 Aug 13 00:46:50.491515 sshd-session[5843]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:50.497494 systemd[1]: sshd@19-146.190.133.69:22-139.178.68.195:36528.service: Deactivated successfully. Aug 13 00:46:50.502178 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:46:50.506437 systemd-logind[1516]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:46:50.509011 systemd-logind[1516]: Removed session 20. Aug 13 00:46:55.512387 systemd[1]: Started sshd@20-146.190.133.69:22-139.178.68.195:36532.service - OpenSSH per-connection server daemon (139.178.68.195:36532). Aug 13 00:46:55.591823 sshd[5857]: Accepted publickey for core from 139.178.68.195 port 36532 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:55.594186 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:55.602691 systemd-logind[1516]: New session 21 of user core. Aug 13 00:46:55.607628 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:46:55.947244 sshd[5859]: Connection closed by 139.178.68.195 port 36532 Aug 13 00:46:55.948796 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:55.960829 systemd[1]: sshd@20-146.190.133.69:22-139.178.68.195:36532.service: Deactivated successfully. Aug 13 00:46:55.967213 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:46:55.971480 systemd-logind[1516]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:46:55.974871 systemd-logind[1516]: Removed session 21. Aug 13 00:46:55.983691 containerd[1537]: time="2025-08-13T00:46:55.983590431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"307c340d2555983369453890560737844aa62c095edc758be29202dec442c525\" id:\"97446fc76bb1eb2b045011daa5fba5a212edf38cf65dddcd38d0d4e51d577efe\" pid:5878 exited_at:{seconds:1755046015 nanos:982416803}" Aug 13 00:47:00.963266 systemd[1]: Started sshd@21-146.190.133.69:22-139.178.68.195:45304.service - OpenSSH per-connection server daemon (139.178.68.195:45304). Aug 13 00:47:01.106222 sshd[5897]: Accepted publickey for core from 139.178.68.195 port 45304 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:47:01.109552 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:47:01.116744 systemd-logind[1516]: New session 22 of user core. Aug 13 00:47:01.126815 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:47:01.661522 sshd[5899]: Connection closed by 139.178.68.195 port 45304 Aug 13 00:47:01.665931 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:01.677463 systemd-logind[1516]: Session 22 logged out. Waiting for processes to exit. 
Aug 13 00:47:01.678851 systemd[1]: sshd@21-146.190.133.69:22-139.178.68.195:45304.service: Deactivated successfully. Aug 13 00:47:01.687815 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:47:01.698964 systemd-logind[1516]: Removed session 22.