Aug 13 00:46:25.903225 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 00:46:25.903263 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:25.903275 kernel: BIOS-provided physical RAM map:
Aug 13 00:46:25.903282 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 00:46:25.903289 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 00:46:25.903296 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 00:46:25.903303 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Aug 13 00:46:25.903316 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Aug 13 00:46:25.903326 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 00:46:25.903333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 00:46:25.903340 kernel: NX (Execute Disable) protection: active
Aug 13 00:46:25.903347 kernel: APIC: Static calls initialized
Aug 13 00:46:25.903354 kernel: SMBIOS 2.8 present.
Aug 13 00:46:25.903361 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Aug 13 00:46:25.903372 kernel: DMI: Memory slots populated: 1/1
Aug 13 00:46:25.903380 kernel: Hypervisor detected: KVM
Aug 13 00:46:25.903392 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 00:46:25.903404 kernel: kvm-clock: using sched offset of 4354360419 cycles
Aug 13 00:46:25.903412 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 00:46:25.903445 kernel: tsc: Detected 2494.140 MHz processor
Aug 13 00:46:25.903453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:46:25.903461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:46:25.903469 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Aug 13 00:46:25.903481 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 00:46:25.903489 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:46:25.903497 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:46:25.903505 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Aug 13 00:46:25.903513 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903521 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903529 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903536 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 00:46:25.903544 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903555 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903563 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:46:25.903571 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001)
Aug 13 00:46:25.903578 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Aug 13 00:46:25.903586 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Aug 13 00:46:25.903594 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 00:46:25.903602 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Aug 13 00:46:25.903610 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Aug 13 00:46:25.903624 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Aug 13 00:46:25.903632 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Aug 13 00:46:25.903640 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 00:46:25.903649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 00:46:25.903657 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Aug 13 00:46:25.903668 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Aug 13 00:46:25.903677 kernel: Zone ranges:
Aug 13 00:46:25.903691 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:46:25.903703 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Aug 13 00:46:25.903714 kernel: Normal empty
Aug 13 00:46:25.903725 kernel: Device empty
Aug 13 00:46:25.903737 kernel: Movable zone start for each node
Aug 13 00:46:25.903749 kernel: Early memory node ranges
Aug 13 00:46:25.903760 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 00:46:25.903772 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Aug 13 00:46:25.903787 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Aug 13 00:46:25.903795 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:46:25.903803 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 00:46:25.903812 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Aug 13 00:46:25.903820 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 00:46:25.903828 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 00:46:25.903840 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:46:25.903850 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 00:46:25.903864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 00:46:25.903879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:46:25.903892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 00:46:25.903904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 00:46:25.903916 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:46:25.903926 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 00:46:25.903934 kernel: TSC deadline timer available
Aug 13 00:46:25.903942 kernel: CPU topo: Max. logical packages: 1
Aug 13 00:46:25.903951 kernel: CPU topo: Max. logical dies: 1
Aug 13 00:46:25.903959 kernel: CPU topo: Max. dies per package: 1
Aug 13 00:46:25.903970 kernel: CPU topo: Max. threads per core: 1
Aug 13 00:46:25.903978 kernel: CPU topo: Num. cores per package: 2
Aug 13 00:46:25.903986 kernel: CPU topo: Num. threads per package: 2
Aug 13 00:46:25.903995 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 00:46:25.904003 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 00:46:25.904012 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Aug 13 00:46:25.904020 kernel: Booting paravirtualized kernel on KVM
Aug 13 00:46:25.904028 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:46:25.904037 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:46:25.904045 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 00:46:25.904056 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 00:46:25.904064 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:46:25.904073 kernel: kvm-guest: PV spinlocks disabled, no host support
Aug 13 00:46:25.904082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:25.904091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:46:25.904099 kernel: random: crng init done
Aug 13 00:46:25.904108 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:46:25.904116 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:46:25.904127 kernel: Fallback order for Node 0: 0
Aug 13 00:46:25.904136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Aug 13 00:46:25.904144 kernel: Policy zone: DMA32
Aug 13 00:46:25.904152 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:46:25.904160 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:46:25.904169 kernel: Kernel/User page tables isolation: enabled
Aug 13 00:46:25.904177 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 00:46:25.904185 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 00:46:25.904193 kernel: Dynamic Preempt: voluntary
Aug 13 00:46:25.904204 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:46:25.904213 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:46:25.904222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:46:25.904230 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:46:25.904239 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:46:25.904247 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:46:25.904255 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:46:25.904263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:46:25.904272 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:25.904285 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:25.904294 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:46:25.904302 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 00:46:25.904310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:46:25.904318 kernel: Console: colour VGA+ 80x25
Aug 13 00:46:25.904326 kernel: printk: legacy console [tty0] enabled
Aug 13 00:46:25.904335 kernel: printk: legacy console [ttyS0] enabled
Aug 13 00:46:25.904343 kernel: ACPI: Core revision 20240827
Aug 13 00:46:25.904351 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 00:46:25.904371 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:46:25.904380 kernel: x2apic enabled
Aug 13 00:46:25.904388 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 00:46:25.904400 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 00:46:25.904411 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 00:46:25.904442 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Aug 13 00:46:25.904454 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 00:46:25.904467 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 00:46:25.904480 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:46:25.904498 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:46:25.904513 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:46:25.904528 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 00:46:25.904542 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 00:46:25.904558 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 00:46:25.904571 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 00:46:25.904583 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:46:25.904602 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:46:25.904616 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:46:25.904632 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:46:25.904647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:46:25.904662 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:46:25.904678 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 00:46:25.904693 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:46:25.904708 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:46:25.904722 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 00:46:25.904740 kernel: landlock: Up and running.
Aug 13 00:46:25.904756 kernel: SELinux: Initializing.
Aug 13 00:46:25.904771 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:46:25.904786 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 00:46:25.904801 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Aug 13 00:46:25.904816 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Aug 13 00:46:25.904832 kernel: signal: max sigframe size: 1776
Aug 13 00:46:25.904847 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:46:25.904862 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:46:25.904880 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 00:46:25.904896 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:46:25.904911 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:46:25.904925 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:46:25.904944 kernel: .... node #0, CPUs: #1
Aug 13 00:46:25.904958 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:46:25.904972 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Aug 13 00:46:25.904987 kernel: Memory: 1966908K/2096612K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 125140K reserved, 0K cma-reserved)
Aug 13 00:46:25.905001 kernel: devtmpfs: initialized
Aug 13 00:46:25.905020 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:46:25.905035 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:46:25.905051 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:46:25.905066 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:46:25.905081 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:46:25.905096 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:46:25.905112 kernel: audit: type=2000 audit(1755045982.015:1): state=initialized audit_enabled=0 res=1
Aug 13 00:46:25.905127 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:46:25.905142 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:46:25.905160 kernel: cpuidle: using governor menu
Aug 13 00:46:25.905175 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:46:25.905190 kernel: dca service started, version 1.12.1
Aug 13 00:46:25.905205 kernel: PCI: Using configuration type 1 for base access
Aug 13 00:46:25.905220 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:46:25.905236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:46:25.905251 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:46:25.905266 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:46:25.905281 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:46:25.905299 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:46:25.905312 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:46:25.905324 kernel: ACPI: Interpreter enabled
Aug 13 00:46:25.905338 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:46:25.905347 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:46:25.905360 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:46:25.905369 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 00:46:25.905380 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Aug 13 00:46:25.905392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:46:25.905690 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:46:25.906554 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 13 00:46:25.906668 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 13 00:46:25.906681 kernel: acpiphp: Slot [3] registered
Aug 13 00:46:25.906691 kernel: acpiphp: Slot [4] registered
Aug 13 00:46:25.906700 kernel: acpiphp: Slot [5] registered
Aug 13 00:46:25.906709 kernel: acpiphp: Slot [6] registered
Aug 13 00:46:25.906724 kernel: acpiphp: Slot [7] registered
Aug 13 00:46:25.906733 kernel: acpiphp: Slot [8] registered
Aug 13 00:46:25.906742 kernel: acpiphp: Slot [9] registered
Aug 13 00:46:25.906751 kernel: acpiphp: Slot [10] registered
Aug 13 00:46:25.906760 kernel: acpiphp: Slot [11] registered
Aug 13 00:46:25.906769 kernel: acpiphp: Slot [12] registered
Aug 13 00:46:25.906778 kernel: acpiphp: Slot [13] registered
Aug 13 00:46:25.906787 kernel: acpiphp: Slot [14] registered
Aug 13 00:46:25.906796 kernel: acpiphp: Slot [15] registered
Aug 13 00:46:25.906806 kernel: acpiphp: Slot [16] registered
Aug 13 00:46:25.906818 kernel: acpiphp: Slot [17] registered
Aug 13 00:46:25.906827 kernel: acpiphp: Slot [18] registered
Aug 13 00:46:25.906836 kernel: acpiphp: Slot [19] registered
Aug 13 00:46:25.906844 kernel: acpiphp: Slot [20] registered
Aug 13 00:46:25.906853 kernel: acpiphp: Slot [21] registered
Aug 13 00:46:25.906863 kernel: acpiphp: Slot [22] registered
Aug 13 00:46:25.906871 kernel: acpiphp: Slot [23] registered
Aug 13 00:46:25.906880 kernel: acpiphp: Slot [24] registered
Aug 13 00:46:25.906889 kernel: acpiphp: Slot [25] registered
Aug 13 00:46:25.906901 kernel: acpiphp: Slot [26] registered
Aug 13 00:46:25.906909 kernel: acpiphp: Slot [27] registered
Aug 13 00:46:25.906918 kernel: acpiphp: Slot [28] registered
Aug 13 00:46:25.906928 kernel: acpiphp: Slot [29] registered
Aug 13 00:46:25.906941 kernel: acpiphp: Slot [30] registered
Aug 13 00:46:25.906953 kernel: acpiphp: Slot [31] registered
Aug 13 00:46:25.906966 kernel: PCI host bridge to bus 0000:00
Aug 13 00:46:25.907150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 00:46:25.907247 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 00:46:25.907338 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:25.907434 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 13 00:46:25.908664 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Aug 13 00:46:25.908806 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:46:25.908990 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Aug 13 00:46:25.909156 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Aug 13 00:46:25.909334 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Aug 13 00:46:25.910586 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Aug 13 00:46:25.910701 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Aug 13 00:46:25.910812 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Aug 13 00:46:25.910919 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Aug 13 00:46:25.911012 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Aug 13 00:46:25.911134 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Aug 13 00:46:25.911284 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Aug 13 00:46:25.912488 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Aug 13 00:46:25.912630 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Aug 13 00:46:25.912770 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Aug 13 00:46:25.912940 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Aug 13 00:46:25.913083 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Aug 13 00:46:25.913230 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Aug 13 00:46:25.913401 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Aug 13 00:46:25.914494 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Aug 13 00:46:25.915578 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 00:46:25.915712 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:46:25.915810 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Aug 13 00:46:25.915903 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Aug 13 00:46:25.916003 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Aug 13 00:46:25.916113 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 00:46:25.916224 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Aug 13 00:46:25.916337 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Aug 13 00:46:25.917337 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Aug 13 00:46:25.917534 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:25.917638 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Aug 13 00:46:25.917876 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Aug 13 00:46:25.917979 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Aug 13 00:46:25.918083 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:25.918185 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Aug 13 00:46:25.918316 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Aug 13 00:46:25.919511 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Aug 13 00:46:25.919688 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Aug 13 00:46:25.919843 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Aug 13 00:46:25.919946 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Aug 13 00:46:25.920042 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Aug 13 00:46:25.920155 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Aug 13 00:46:25.920253 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Aug 13 00:46:25.920356 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Aug 13 00:46:25.920379 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 00:46:25.920392 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 00:46:25.920405 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 00:46:25.921466 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 00:46:25.921491 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 13 00:46:25.921501 kernel: iommu: Default domain type: Translated
Aug 13 00:46:25.921511 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:46:25.921521 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:46:25.921530 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 00:46:25.921546 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 00:46:25.921555 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Aug 13 00:46:25.921702 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Aug 13 00:46:25.921860 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Aug 13 00:46:25.921974 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 00:46:25.921987 kernel: vgaarb: loaded
Aug 13 00:46:25.921997 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 00:46:25.922014 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 00:46:25.922029 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 00:46:25.922038 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:46:25.922048 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:46:25.922057 kernel: pnp: PnP ACPI init
Aug 13 00:46:25.922066 kernel: pnp: PnP ACPI: found 4 devices
Aug 13 00:46:25.922076 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:46:25.922085 kernel: NET: Registered PF_INET protocol family
Aug 13 00:46:25.922094 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:46:25.922103 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 00:46:25.922116 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:46:25.922125 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:46:25.922161 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 00:46:25.922171 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 00:46:25.922179 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:46:25.922189 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 00:46:25.922198 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:46:25.922207 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:46:25.922309 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 00:46:25.922399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 00:46:25.923574 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 00:46:25.923693 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 13 00:46:25.923781 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Aug 13 00:46:25.923899 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Aug 13 00:46:25.924001 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 13 00:46:25.924015 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 13 00:46:25.924111 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 26024 usecs
Aug 13 00:46:25.924132 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:46:25.924142 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:46:25.924151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Aug 13 00:46:25.924160 kernel: Initialise system trusted keyrings
Aug 13 00:46:25.924170 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 00:46:25.924179 kernel: Key type asymmetric registered
Aug 13 00:46:25.924188 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:46:25.924197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:46:25.924210 kernel: io scheduler mq-deadline registered
Aug 13 00:46:25.924219 kernel: io scheduler kyber registered
Aug 13 00:46:25.924229 kernel: io scheduler bfq registered
Aug 13 00:46:25.924238 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:46:25.924247 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Aug 13 00:46:25.924256 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Aug 13 00:46:25.924265 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Aug 13 00:46:25.924274 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:46:25.924283 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:46:25.924292 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 00:46:25.924305 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 00:46:25.924314 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 00:46:25.926508 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 00:46:25.926645 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 00:46:25.926735 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:46:25 UTC (1755045985)
Aug 13 00:46:25.926833 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 00:46:25.926846 kernel: intel_pstate: CPU model not supported
Aug 13 00:46:25.926863 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Aug 13 00:46:25.926872 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:46:25.926882 kernel: Segment Routing with IPv6
Aug 13 00:46:25.926891 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:46:25.926900 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:46:25.926909 kernel: Key type dns_resolver registered
Aug 13 00:46:25.926918 kernel: IPI shorthand broadcast: enabled
Aug 13 00:46:25.926927 kernel: sched_clock: Marking stable (3373006065, 92467511)->(3485787156, -20313580)
Aug 13 00:46:25.926937 kernel: registered taskstats version 1
Aug 13 00:46:25.926949 kernel: Loading compiled-in X.509 certificates
Aug 13 00:46:25.926958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 00:46:25.926970 kernel: Demotion targets for Node 0: null
Aug 13 00:46:25.926983 kernel: Key type .fscrypt registered
Aug 13 00:46:25.926997 kernel: Key type fscrypt-provisioning registered
Aug 13 00:46:25.927014 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:46:25.927047 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:46:25.927059 kernel: ima: No architecture policies found
Aug 13 00:46:25.927069 kernel: clk: Disabling unused clocks
Aug 13 00:46:25.927081 kernel: Warning: unable to open an initial console.
Aug 13 00:46:25.927091 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 00:46:25.927102 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 00:46:25.927111 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 00:46:25.927124 kernel: Run /init as init process
Aug 13 00:46:25.927138 kernel: with arguments:
Aug 13 00:46:25.927148 kernel: /init
Aug 13 00:46:25.927158 kernel: with environment:
Aug 13 00:46:25.927167 kernel: HOME=/
Aug 13 00:46:25.927180 kernel: TERM=linux
Aug 13 00:46:25.927189 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:46:25.927200 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:46:25.927214 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:25.927225 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:25.927234 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:25.927244 systemd[1]: Running in initrd.
Aug 13 00:46:25.927253 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:46:25.927266 systemd[1]: Hostname set to .
Aug 13 00:46:25.927276 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:46:25.927286 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:46:25.927296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:25.927306 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:25.927318 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:46:25.927328 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:25.927341 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:46:25.927351 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:46:25.927363 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:46:25.927375 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:46:25.927388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:25.927398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:25.927408 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:25.927440 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:25.927451 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:25.927460 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:25.927470 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:25.927480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:25.927490 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:46:25.927504 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:46:25.927514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:25.927524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:25.927534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:25.927544 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:25.927553 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:46:25.927563 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:25.927575 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:46:25.927593 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 00:46:25.927607 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:46:25.927626 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:25.927646 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:25.927665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:25.927684 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:25.927710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:25.927786 systemd-journald[212]: Collecting audit messages is disabled.
Aug 13 00:46:25.927834 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:46:25.927854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:25.927875 systemd-journald[212]: Journal started
Aug 13 00:46:25.927911 systemd-journald[212]: Runtime Journal (/run/log/journal/6762c5604505440c86587fb7fa48281a) is 4.9M, max 39.5M, 34.6M free.
Aug 13 00:46:25.930321 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:25.936924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:25.937962 systemd-modules-load[214]: Inserted module 'overlay'
Aug 13 00:46:25.962428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:25.973902 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:46:25.976928 systemd-modules-load[214]: Inserted module 'br_netfilter'
Aug 13 00:46:25.978075 kernel: Bridge firewalling registered
Aug 13 00:46:25.978465 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:25.979749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:25.983172 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:25.989074 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 00:46:25.991181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:46:25.996616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:25.997965 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:26.010666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:26.020836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:26.025252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:26.031484 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:26.034578 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:46:26.058451 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 00:46:26.075870 systemd-resolved[248]: Positive Trust Anchors:
Aug 13 00:46:26.076481 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:26.076523 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:26.081985 systemd-resolved[248]: Defaulting to hostname 'linux'.
Aug 13 00:46:26.083168 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:26.083613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:26.169463 kernel: SCSI subsystem initialized
Aug 13 00:46:26.179486 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:46:26.192504 kernel: iscsi: registered transport (tcp)
Aug 13 00:46:26.218547 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:46:26.218644 kernel: QLogic iSCSI HBA Driver
Aug 13 00:46:26.243376 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:26.274464 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:26.277094 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:26.343963 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:26.346484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:46:26.412528 kernel: raid6: avx2x4 gen() 14361 MB/s
Aug 13 00:46:26.429490 kernel: raid6: avx2x2 gen() 14852 MB/s
Aug 13 00:46:26.446685 kernel: raid6: avx2x1 gen() 11348 MB/s
Aug 13 00:46:26.446788 kernel: raid6: using algorithm avx2x2 gen() 14852 MB/s
Aug 13 00:46:26.464747 kernel: raid6: .... xor() 14376 MB/s, rmw enabled
Aug 13 00:46:26.464858 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 00:46:26.494476 kernel: xor: automatically using best checksumming function avx
Aug 13 00:46:26.742490 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:46:26.753192 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:26.755721 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:26.800459 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Aug 13 00:46:26.810774 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:26.814244 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:46:26.845228 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Aug 13 00:46:26.885273 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:26.889267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:26.983641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:26.989510 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:46:27.070467 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Aug 13 00:46:27.076738 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 00:46:27.097976 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:46:27.098040 kernel: GPT:9289727 != 125829119
Aug 13 00:46:27.098053 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:46:27.098065 kernel: GPT:9289727 != 125829119
Aug 13 00:46:27.098539 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:46:27.099610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:27.113463 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Aug 13 00:46:27.118661 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:46:27.124604 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 00:46:27.124675 kernel: scsi host0: Virtio SCSI HBA
Aug 13 00:46:27.130725 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Aug 13 00:46:27.130958 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Aug 13 00:46:27.143451 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:46:27.176539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:27.176688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:27.178865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:27.184702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:46:27.186320 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:27.205446 kernel: libata version 3.00 loaded.
Aug 13 00:46:27.216452 kernel: ata_piix 0000:00:01.1: version 2.13
Aug 13 00:46:27.232467 kernel: scsi host1: ata_piix
Aug 13 00:46:27.240928 kernel: scsi host2: ata_piix
Aug 13 00:46:27.241296 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Aug 13 00:46:27.241319 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Aug 13 00:46:27.243458 kernel: ACPI: bus type USB registered
Aug 13 00:46:27.243542 kernel: usbcore: registered new interface driver usbfs
Aug 13 00:46:27.243588 kernel: usbcore: registered new interface driver hub
Aug 13 00:46:27.243616 kernel: usbcore: registered new device driver usb
Aug 13 00:46:27.294788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:46:27.295855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:27.319336 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:46:27.330145 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:46:27.330653 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:46:27.341855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:46:27.343679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:46:27.375558 disk-uuid[609]: Primary Header is updated.
Aug 13 00:46:27.375558 disk-uuid[609]: Secondary Entries is updated.
Aug 13 00:46:27.375558 disk-uuid[609]: Secondary Header is updated.
Aug 13 00:46:27.386455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:27.406477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:27.440561 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Aug 13 00:46:27.440824 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Aug 13 00:46:27.440948 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Aug 13 00:46:27.441500 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Aug 13 00:46:27.442448 kernel: hub 1-0:1.0: USB hub found
Aug 13 00:46:27.443438 kernel: hub 1-0:1.0: 2 ports detected
Aug 13 00:46:27.562590 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:27.580467 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:27.581156 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:27.581987 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:27.583849 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:46:27.615705 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:28.394552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:46:28.395879 disk-uuid[610]: The operation has completed successfully.
Aug 13 00:46:28.446672 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:46:28.446809 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:46:28.484478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:46:28.497926 sh[634]: Success
Aug 13 00:46:28.524756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:46:28.524903 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:46:28.526017 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 00:46:28.539462 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Aug 13 00:46:28.599564 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:46:28.604544 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:46:28.614672 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:46:28.627622 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 00:46:28.627699 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (646)
Aug 13 00:46:28.630991 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 00:46:28.631060 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:28.631074 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 00:46:28.639165 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:46:28.640192 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:28.640769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:46:28.641703 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:46:28.646495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:46:28.680452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (678)
Aug 13 00:46:28.683914 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:28.684000 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:28.684021 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:28.693435 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:28.694685 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:46:28.697652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:46:28.787223 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:28.789389 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:28.844658 systemd-networkd[817]: lo: Link UP
Aug 13 00:46:28.845275 systemd-networkd[817]: lo: Gained carrier
Aug 13 00:46:28.848517 systemd-networkd[817]: Enumeration completed
Aug 13 00:46:28.849846 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:28.850349 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 00:46:28.850354 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Aug 13 00:46:28.851655 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:28.852205 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:28.852210 systemd-networkd[817]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:46:28.852587 systemd-networkd[817]: eth0: Link UP
Aug 13 00:46:28.852745 systemd-networkd[817]: eth1: Link UP
Aug 13 00:46:28.852880 systemd-networkd[817]: eth0: Gained carrier
Aug 13 00:46:28.852891 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Aug 13 00:46:28.856957 systemd-networkd[817]: eth1: Gained carrier
Aug 13 00:46:28.856975 systemd-networkd[817]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:46:28.867534 systemd-networkd[817]: eth0: DHCPv4 address 134.199.224.26/20, gateway 134.199.224.1 acquired from 169.254.169.253
Aug 13 00:46:28.882539 systemd-networkd[817]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Aug 13 00:46:28.890758 ignition[723]: Ignition 2.21.0
Aug 13 00:46:28.891415 ignition[723]: Stage: fetch-offline
Aug 13 00:46:28.891497 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:28.891507 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:28.891618 ignition[723]: parsed url from cmdline: ""
Aug 13 00:46:28.891624 ignition[723]: no config URL provided
Aug 13 00:46:28.891630 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:28.893863 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:28.891637 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:28.891643 ignition[723]: failed to fetch config: resource requires networking
Aug 13 00:46:28.891859 ignition[723]: Ignition finished successfully
Aug 13 00:46:28.897634 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:46:28.939206 ignition[828]: Ignition 2.21.0
Aug 13 00:46:28.939823 ignition[828]: Stage: fetch
Aug 13 00:46:28.939985 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:28.939996 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:28.940118 ignition[828]: parsed url from cmdline: ""
Aug 13 00:46:28.940122 ignition[828]: no config URL provided
Aug 13 00:46:28.940127 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:46:28.940135 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:46:28.940167 ignition[828]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Aug 13 00:46:28.956781 ignition[828]: GET result: OK
Aug 13 00:46:28.956974 ignition[828]: parsing config with SHA512: 6f9a11cd0f6faeb6dfb2ab2e908ea5b35bf137ab046e7c9e5b1b98e2ddfb49d178df8dcd98dc7e8aa38bf252976338981668610ddef051a681b6e66758e1ebd1
Aug 13 00:46:28.961467 unknown[828]: fetched base config from "system"
Aug 13 00:46:28.961478 unknown[828]: fetched base config from "system"
Aug 13 00:46:28.961891 ignition[828]: fetch: fetch complete
Aug 13 00:46:28.961485 unknown[828]: fetched user config from "digitalocean"
Aug 13 00:46:28.961897 ignition[828]: fetch: fetch passed
Aug 13 00:46:28.961953 ignition[828]: Ignition finished successfully
Aug 13 00:46:28.964925 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:28.966811 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:46:29.001231 ignition[834]: Ignition 2.21.0
Aug 13 00:46:29.001248 ignition[834]: Stage: kargs
Aug 13 00:46:29.001518 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:29.001535 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:29.002971 ignition[834]: kargs: kargs passed
Aug 13 00:46:29.003047 ignition[834]: Ignition finished successfully
Aug 13 00:46:29.004960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:29.007365 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:46:29.041204 ignition[841]: Ignition 2.21.0
Aug 13 00:46:29.041221 ignition[841]: Stage: disks
Aug 13 00:46:29.041477 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:29.041493 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:29.043509 ignition[841]: disks: disks passed
Aug 13 00:46:29.044133 ignition[841]: Ignition finished successfully
Aug 13 00:46:29.046069 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:46:29.046800 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:29.047219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:46:29.047986 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:29.048715 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:29.049367 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:29.051429 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:46:29.104043 systemd-fsck[849]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 00:46:29.109819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:46:29.112594 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:46:29.241449 kernel: EXT4-fs (vda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 00:46:29.242801 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:46:29.244130 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:29.246889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:29.250563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:46:29.258614 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Aug 13 00:46:29.262572 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:46:29.264188 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:46:29.265285 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:29.268367 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:46:29.276452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857)
Aug 13 00:46:29.280684 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:29.284541 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:29.284630 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:29.285702 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:46:29.305631 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:29.357652 coreos-metadata[859]: Aug 13 00:46:29.357 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:46:29.371265 coreos-metadata[859]: Aug 13 00:46:29.370 INFO Fetch successful
Aug 13 00:46:29.372617 initrd-setup-root[887]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:46:29.373515 coreos-metadata[860]: Aug 13 00:46:29.373 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:46:29.380695 initrd-setup-root[894]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:46:29.381600 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Aug 13 00:46:29.381864 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Aug 13 00:46:29.386071 coreos-metadata[860]: Aug 13 00:46:29.385 INFO Fetch successful
Aug 13 00:46:29.389648 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:46:29.392452 coreos-metadata[860]: Aug 13 00:46:29.392 INFO wrote hostname ci-4372.1.0-a-508df13d84 to /sysroot/etc/hostname
Aug 13 00:46:29.393632 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:46:29.398457 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:46:29.520837 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:29.523565 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:46:29.524997 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:29.549483 kernel: BTRFS info (device vda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:29.571830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:29.582327 ignition[979]: INFO : Ignition 2.21.0
Aug 13 00:46:29.584443 ignition[979]: INFO : Stage: mount
Aug 13 00:46:29.584443 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:29.584443 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:29.587457 ignition[979]: INFO : mount: mount passed
Aug 13 00:46:29.587457 ignition[979]: INFO : Ignition finished successfully
Aug 13 00:46:29.590163 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:46:29.591748 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:46:29.628622 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:46:29.631070 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:46:29.660486 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (990)
Aug 13 00:46:29.660572 kernel: BTRFS info (device vda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 00:46:29.662913 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:46:29.662984 kernel: BTRFS info (device vda6): using free-space-tree
Aug 13 00:46:29.670330 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:46:29.708529 ignition[1006]: INFO : Ignition 2.21.0
Aug 13 00:46:29.708529 ignition[1006]: INFO : Stage: files
Aug 13 00:46:29.711070 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:29.711070 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:29.714538 ignition[1006]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:46:29.716710 ignition[1006]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:46:29.716710 ignition[1006]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:46:29.720582 ignition[1006]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:46:29.721471 ignition[1006]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:46:29.722918 unknown[1006]: wrote ssh authorized keys file for user: core
Aug 13 00:46:29.723727 ignition[1006]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:46:29.725432 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:29.726455 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 00:46:29.770104 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:46:29.935697 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 00:46:29.935697 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:29.937631 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:29.947476 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 00:46:30.009591 systemd-networkd[817]: eth0: Gained IPv6LL
Aug 13 00:46:30.268349 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:46:30.605968 ignition[1006]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 00:46:30.607135 ignition[1006]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:46:30.607787 ignition[1006]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:30.609399 ignition[1006]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:46:30.609399 ignition[1006]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:46:30.609399 ignition[1006]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:30.611430 ignition[1006]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:46:30.611430 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:30.611430 ignition[1006]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:46:30.611430 ignition[1006]: INFO : files: files passed
Aug 13 00:46:30.611430 ignition[1006]: INFO : Ignition finished successfully
Aug 13 00:46:30.611773 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:46:30.615408 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:46:30.618643 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:46:30.639019 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:46:30.639158 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:46:30.649239 initrd-setup-root-after-ignition[1037]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:30.649239 initrd-setup-root-after-ignition[1037]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:30.651854 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:46:30.653403 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:30.654972 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:46:30.656245 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:46:30.708880 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:46:30.709028 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:46:30.710338 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:30.710968 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:46:30.711893 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:46:30.713168 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:46:30.742013 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:30.744449 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:46:30.770518 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:30.771644 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:30.772152 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:46:30.772716 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:46:30.772913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:46:30.774304 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:46:30.775254 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:46:30.776228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:46:30.776982 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:46:30.777953 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 00:46:30.778760 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 00:46:30.779818 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 00:46:30.780718 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:46:30.781745 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 00:46:30.782607 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 00:46:30.783573 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 00:46:30.784353 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 00:46:30.784559 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:46:30.785597 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:30.786617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:30.787395 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 00:46:30.787568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:30.788306 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 00:46:30.788511 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:46:30.789700 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 00:46:30.789954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:46:30.791009 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 00:46:30.791220 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 00:46:30.791885 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Aug 13 00:46:30.792084 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:46:30.794522 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 00:46:30.797789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 00:46:30.799541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 00:46:30.799853 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:30.801818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 00:46:30.801996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:46:30.809800 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 00:46:30.809961 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 00:46:30.841552 systemd-networkd[817]: eth1: Gained IPv6LL
Aug 13 00:46:30.842502 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 00:46:30.847973 ignition[1061]: INFO : Ignition 2.21.0
Aug 13 00:46:30.847973 ignition[1061]: INFO : Stage: umount
Aug 13 00:46:30.850813 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:46:30.850813 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Aug 13 00:46:30.850813 ignition[1061]: INFO : umount: umount passed
Aug 13 00:46:30.850813 ignition[1061]: INFO : Ignition finished successfully
Aug 13 00:46:30.851770 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 00:46:30.851920 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 00:46:30.852872 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 00:46:30.853016 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 00:46:30.855285 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 00:46:30.855744 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 00:46:30.856652 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 00:46:30.856724 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 00:46:30.857364 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 00:46:30.857450 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 00:46:30.858118 systemd[1]: Stopped target network.target - Network.
Aug 13 00:46:30.858918 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 00:46:30.858995 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:46:30.859721 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 00:46:30.860378 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 00:46:30.860537 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:30.861183 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 00:46:30.862021 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 00:46:30.862952 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 00:46:30.863016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:46:30.864112 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 00:46:30.864175 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:46:30.864978 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 00:46:30.865077 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 00:46:30.866015 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 00:46:30.866095 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 00:46:30.866995 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 00:46:30.867082 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 00:46:30.868089 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 00:46:30.868768 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:30.876809 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 00:46:30.876985 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:30.881069 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 00:46:30.883003 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 00:46:30.883088 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:30.885329 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:46:30.887175 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 00:46:30.887335 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 00:46:30.889584 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 00:46:30.890682 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 00:46:30.891136 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 00:46:30.891181 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:30.893082 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 00:46:30.894626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 00:46:30.894709 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:46:30.895513 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 00:46:30.895575 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:30.898577 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 00:46:30.898649 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:30.899111 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:30.900749 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 00:46:30.924336 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 00:46:30.925334 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:30.927156 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 00:46:30.927988 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 00:46:30.929689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 00:46:30.929801 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:30.930787 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 00:46:30.930825 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:30.931562 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 00:46:30.931618 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:46:30.932760 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 00:46:30.932807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:46:30.933498 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:46:30.933545 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:46:30.935276 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 00:46:30.937028 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 00:46:30.937088 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:30.940134 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 00:46:30.940213 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:30.940998 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 00:46:30.941092 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:30.942225 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 00:46:30.942291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:30.943052 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:46:30.943116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:46:30.955307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 00:46:30.955492 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 00:46:30.956750 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 00:46:30.959638 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 00:46:30.981059 systemd[1]: Switching root.
Aug 13 00:46:31.016399 systemd-journald[212]: Journal stopped
Aug 13 00:46:32.277410 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
Aug 13 00:46:32.277550 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 00:46:32.277571 kernel: SELinux: policy capability open_perms=1
Aug 13 00:46:32.277598 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 00:46:32.277614 kernel: SELinux: policy capability always_check_network=0
Aug 13 00:46:32.277650 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 00:46:32.277666 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 00:46:32.277683 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 00:46:32.277718 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 00:46:32.277740 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 00:46:32.277762 kernel: audit: type=1403 audit(1755045991.185:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 00:46:32.277783 systemd[1]: Successfully loaded SELinux policy in 39.139ms.
Aug 13 00:46:32.277818 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.582ms.
Aug 13 00:46:32.277840 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:46:32.277858 systemd[1]: Detected virtualization kvm.
Aug 13 00:46:32.277875 systemd[1]: Detected architecture x86-64.
Aug 13 00:46:32.277888 systemd[1]: Detected first boot.
Aug 13 00:46:32.277905 systemd[1]: Hostname set to .
Aug 13 00:46:32.277923 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:46:32.277937 zram_generator::config[1106]: No configuration found.
Aug 13 00:46:32.277951 kernel: Guest personality initialized and is inactive
Aug 13 00:46:32.277968 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 00:46:32.277985 kernel: Initialized host personality
Aug 13 00:46:32.278001 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 00:46:32.278020 systemd[1]: Populated /etc with preset unit settings.
Aug 13 00:46:32.278040 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 00:46:32.278065 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 00:46:32.278084 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 00:46:32.278104 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:46:32.278126 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 00:46:32.278147 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 00:46:32.278162 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 00:46:32.279882 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 00:46:32.279938 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 00:46:32.279962 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:46:32.279982 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 00:46:32.280000 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 00:46:32.280018 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:46:32.280037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:46:32.280055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 00:46:32.280073 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 00:46:32.280096 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 00:46:32.280114 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:46:32.280130 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 00:46:32.280147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:46:32.280167 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:46:32.280184 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 00:46:32.280240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 00:46:32.280258 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:46:32.280280 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 00:46:32.280296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:46:32.280315 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:46:32.280332 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:46:32.280349 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:46:32.280365 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 00:46:32.280382 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 00:46:32.280398 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 00:46:32.280436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:46:32.280463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:46:32.280481 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:46:32.280499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 00:46:32.280518 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 00:46:32.280533 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 00:46:32.280546 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 00:46:32.280558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:32.280571 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 00:46:32.280584 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 00:46:32.280600 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 00:46:32.280613 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 00:46:32.280626 systemd[1]: Reached target machines.target - Containers.
Aug 13 00:46:32.280641 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 00:46:32.280654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:32.280666 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:46:32.280679 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 00:46:32.280691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:32.280704 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:32.280720 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:32.280733 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 00:46:32.280745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:32.280759 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 00:46:32.280772 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 00:46:32.280785 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 00:46:32.280806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 00:46:32.280842 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 00:46:32.280879 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:32.280897 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:46:32.280915 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:46:32.280936 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 00:46:32.280954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 00:46:32.280975 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 00:46:32.280994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:46:32.281014 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 00:46:32.281028 systemd[1]: Stopped verity-setup.service.
Aug 13 00:46:32.281043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:32.281059 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 00:46:32.281072 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 00:46:32.281084 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 00:46:32.281097 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 00:46:32.281110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 00:46:32.281123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 00:46:32.281136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:46:32.281148 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 00:46:32.281160 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 00:46:32.281176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:32.281188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:32.281200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:32.281213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:32.281226 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:46:32.281239 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 00:46:32.281251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 00:46:32.281264 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 00:46:32.281279 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:46:32.281292 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 00:46:32.281354 systemd-journald[1174]: Collecting audit messages is disabled.
Aug 13 00:46:32.281382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 00:46:32.281395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:32.281409 systemd-journald[1174]: Journal started
Aug 13 00:46:32.281875 systemd-journald[1174]: Runtime Journal (/run/log/journal/6762c5604505440c86587fb7fa48281a) is 4.9M, max 39.5M, 34.6M free.
Aug 13 00:46:31.955781 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 00:46:31.980287 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 13 00:46:31.980763 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 00:46:32.290445 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 00:46:32.294476 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:32.301459 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 00:46:32.303452 kernel: loop: module loaded
Aug 13 00:46:32.307454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:46:32.318464 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 00:46:32.329449 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:46:32.331452 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:46:32.336802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:32.338679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:32.349209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 00:46:32.352681 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 00:46:32.354882 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 00:46:32.393376 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 00:46:32.408443 kernel: fuse: init (API version 7.41)
Aug 13 00:46:32.412670 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 00:46:32.413359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:32.414087 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 00:46:32.415529 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 00:46:32.441452 kernel: loop0: detected capacity change from 0 to 146240
Aug 13 00:46:32.496925 systemd-journald[1174]: Time spent on flushing to /var/log/journal/6762c5604505440c86587fb7fa48281a is 97.519ms for 1002 entries.
Aug 13 00:46:32.496925 systemd-journald[1174]: System Journal (/var/log/journal/6762c5604505440c86587fb7fa48281a) is 8M, max 195.6M, 187.6M free.
Aug 13 00:46:32.600796 systemd-journald[1174]: Received client request to flush runtime journal.
Aug 13 00:46:32.600869 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 00:46:32.600895 kernel: loop1: detected capacity change from 0 to 8
Aug 13 00:46:32.600917 kernel: ACPI: bus type drm_connector registered
Aug 13 00:46:32.497653 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 00:46:32.498600 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 00:46:32.503664 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 00:46:32.569173 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:46:32.576566 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 00:46:32.590571 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:32.591562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:32.611453 kernel: loop2: detected capacity change from 0 to 113872
Aug 13 00:46:32.609031 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 00:46:32.613131 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 00:46:32.640980 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Aug 13 00:46:32.640996 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Aug 13 00:46:32.646238 kernel: loop3: detected capacity change from 0 to 221472
Aug 13 00:46:32.644619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:46:32.656531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:46:32.661768 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 00:46:32.751118 kernel: loop4: detected capacity change from 0 to 146240
Aug 13 00:46:32.784015 kernel: loop5: detected capacity change from 0 to 8
Aug 13 00:46:32.784102 kernel: loop6: detected capacity change from 0 to 113872
Aug 13 00:46:32.783005 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 00:46:32.789691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:46:32.803435 kernel: loop7: detected capacity change from 0 to 221472
Aug 13 00:46:32.839388 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Aug 13 00:46:32.840355 (sd-merge)[1250]: Merged extensions into '/usr'.
Aug 13 00:46:32.853350 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 00:46:32.853457 systemd[1]: Reloading...
Aug 13 00:46:32.884642 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 13 00:46:32.884675 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Aug 13 00:46:33.004448 zram_generator::config[1281]: No configuration found.
Aug 13 00:46:33.142748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:46:33.236318 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 00:46:33.245144 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 00:46:33.245339 systemd[1]: Reloading finished in 391 ms.
Aug 13 00:46:33.258530 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 00:46:33.259632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:46:33.262912 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 00:46:33.267688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 00:46:33.275775 systemd[1]: Starting ensure-sysext.service...
Aug 13 00:46:33.280698 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:46:33.298119 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 00:46:33.324085 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Aug 13 00:46:33.324105 systemd[1]: Reloading...
Aug 13 00:46:33.362140 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 00:46:33.362583 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 00:46:33.362956 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 00:46:33.363370 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 00:46:33.364502 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:46:33.364992 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Aug 13 00:46:33.365298 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Aug 13 00:46:33.369926 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:46:33.371603 systemd-tmpfiles[1327]: Skipping /boot
Aug 13 00:46:33.400127 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 00:46:33.400363 systemd-tmpfiles[1327]: Skipping /boot
Aug 13 00:46:33.486465 zram_generator::config[1364]: No configuration found.
Aug 13 00:46:33.596673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:46:33.698379 systemd[1]: Reloading finished in 373 ms.
Aug 13 00:46:33.722219 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 00:46:33.729653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:46:33.741666 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:46:33.745836 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 00:46:33.759638 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 00:46:33.763729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:46:33.766971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:46:33.768540 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 00:46:33.777004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.777271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:33.780376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 00:46:33.788370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 00:46:33.794875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 00:46:33.795398 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:33.795603 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:33.795704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.806893 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 00:46:33.811714 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.812007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:33.812244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:33.812337 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:33.813577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.819378 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.820329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 00:46:33.827738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 00:46:33.829655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 00:46:33.829833 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 00:46:33.829979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 00:46:33.837482 systemd[1]: Finished ensure-sysext.service.
Aug 13 00:46:33.838384 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 00:46:33.839026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 00:46:33.839810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 00:46:33.845488 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 00:46:33.845813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 00:46:33.856197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 00:46:33.860676 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 00:46:33.861467 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 00:46:33.864131 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 00:46:33.868303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 00:46:33.868554 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 00:46:33.869821 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 00:46:33.887082 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 00:46:33.887295 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 00:46:33.894061 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 00:46:33.902203 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 00:46:33.920061 systemd-udevd[1404]: Using default interface naming scheme 'v255'.
Aug 13 00:46:33.946885 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 00:46:33.958772 augenrules[1441]: No rules
Aug 13 00:46:33.959203 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 00:46:33.961283 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:46:33.962552 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:46:33.971682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:46:33.975677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:46:34.100024 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 00:46:34.100752 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 00:46:34.143172 systemd-networkd[1452]: lo: Link UP
Aug 13 00:46:34.143183 systemd-networkd[1452]: lo: Gained carrier
Aug 13 00:46:34.144168 systemd-networkd[1452]: Enumeration completed
Aug 13 00:46:34.144303 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:46:34.146626 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 00:46:34.149831 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 00:46:34.165547 systemd-resolved[1403]: Positive Trust Anchors:
Aug 13 00:46:34.165564 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:46:34.165605 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:46:34.175976 systemd-resolved[1403]: Using system hostname 'ci-4372.1.0-a-508df13d84'.
Aug 13 00:46:34.179397 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:46:34.180583 systemd[1]: Reached target network.target - Network.
Aug 13 00:46:34.181607 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:46:34.182081 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:46:34.183629 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 00:46:34.184169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 00:46:34.184620 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 00:46:34.185219 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 00:46:34.186636 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 00:46:34.187104 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 00:46:34.188530 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 00:46:34.188577 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:46:34.188962 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:46:34.190876 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 00:46:34.194020 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 00:46:34.202503 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 00:46:34.204710 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 00:46:34.206529 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 00:46:34.218669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 00:46:34.219935 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 00:46:34.222683 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 00:46:34.223500 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 00:46:34.228258 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:46:34.228978 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:46:34.229384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:34.229411 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 00:46:34.232542 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 00:46:34.235734 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 00:46:34.239700 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 00:46:34.242903 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 00:46:34.249687 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 00:46:34.256138 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 00:46:34.256590 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:46:34.260444 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 00:46:34.266237 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 00:46:34.270967 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 00:46:34.276695 jq[1490]: false
Aug 13 00:46:34.277715 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 00:46:34.281534 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 00:46:34.293412 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 00:46:34.294791 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 00:46:34.295360 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 00:46:34.306494 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 00:46:34.310265 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 00:46:34.318243 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 00:46:34.319817 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 00:46:34.320030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 00:46:34.356782 extend-filesystems[1491]: Found /dev/vda6
Aug 13 00:46:34.362404 oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Aug 13 00:46:34.364823 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Aug 13 00:46:34.361663 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:46:34.364688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:46:34.367688 tar[1510]: linux-amd64/helm
Aug 13 00:46:34.372485 extend-filesystems[1491]: Found /dev/vda9
Aug 13 00:46:34.378033 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 00:46:34.378308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 00:46:34.393640 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting
Aug 13 00:46:34.393640 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:46:34.393640 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache
Aug 13 00:46:34.392048 oslogin_cache_refresh[1492]: Failure getting users, quitting
Aug 13 00:46:34.392076 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 00:46:34.392141 oslogin_cache_refresh[1492]: Refreshing group entry cache
Aug 13 00:46:34.400577 extend-filesystems[1491]: Checking size of /dev/vda9
Aug 13 00:46:34.403914 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting
Aug 13 00:46:34.403914 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 00:46:34.402445 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:46:34.402404 oslogin_cache_refresh[1492]: Failure getting groups, quitting
Aug 13 00:46:34.411950 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 00:46:34.402430 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 00:46:34.412405 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 00:46:34.417008 jq[1502]: true
Aug 13 00:46:34.465853 coreos-metadata[1487]: Aug 13 00:46:34.462 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:46:34.465853 coreos-metadata[1487]: Aug 13 00:46:34.462 INFO Failed to fetch: error sending request for url (http://169.254.169.254/metadata/v1.json)
Aug 13 00:46:34.466332 extend-filesystems[1491]: Resized partition /dev/vda9
Aug 13 00:46:34.470618 jq[1531]: true
Aug 13 00:46:34.470801 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 00:46:34.489637 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Aug 13 00:46:34.480415 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:46:34.478331 dbus-daemon[1488]: [system] SELinux support is enabled
Aug 13 00:46:34.490272 update_engine[1500]: I20250813 00:46:34.475696 1500 main.cc:92] Flatcar Update Engine starting
Aug 13 00:46:34.495811 update_engine[1500]: I20250813 00:46:34.494987 1500 update_check_scheduler.cc:74] Next update check in 7m48s
Aug 13 00:46:34.495158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:46:34.495197 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:46:34.497840 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:46:34.497879 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:46:34.498435 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:46:34.505494 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 00:46:34.660262 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Aug 13 00:46:34.665487 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Aug 13 00:46:34.665933 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 00:46:34.695468 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 13 00:46:34.723966 systemd-logind[1499]: New seat seat0.
Aug 13 00:46:34.724521 kernel: ISO 9660 Extensions: RRIP_1991A
Aug 13 00:46:34.725005 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:46:34.727528 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 00:46:34.727528 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 13 00:46:34.727528 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 13 00:46:34.739367 extend-filesystems[1491]: Resized filesystem in /dev/vda9
Aug 13 00:46:34.727945 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:46:34.737753 systemd-networkd[1452]: eth1: Configuring with /run/systemd/network/10-22:ce:21:dc:48:c0.network.
Aug 13 00:46:34.739347 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 00:46:34.739612 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 00:46:34.741385 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 00:46:34.742389 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Aug 13 00:46:34.749975 systemd-networkd[1452]: eth0: Configuring with /run/systemd/network/10-2a:7a:ef:1d:16:a1.network.
Aug 13 00:46:34.754125 systemd-networkd[1452]: eth1: Link UP
Aug 13 00:46:34.757078 systemd[1]: Starting sshkeys.service...
Aug 13 00:46:34.757519 systemd-networkd[1452]: eth1: Gained carrier
Aug 13 00:46:34.758164 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Aug 13 00:46:34.763786 systemd-networkd[1452]: eth0: Link UP
Aug 13 00:46:34.764514 systemd-networkd[1452]: eth0: Gained carrier
Aug 13 00:46:34.771038 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Aug 13 00:46:34.772527 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Aug 13 00:46:34.810819 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 00:46:34.822187 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 00:46:34.825807 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 00:46:34.887508 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:46:34.964200 coreos-metadata[1578]: Aug 13 00:46:34.961 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Aug 13 00:46:34.978187 coreos-metadata[1578]: Aug 13 00:46:34.977 INFO Fetch successful
Aug 13 00:46:34.994268 unknown[1578]: wrote ssh authorized keys file for user: core
Aug 13 00:46:35.010027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:46:35.018878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 00:46:35.058297 sshd_keygen[1525]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 00:46:35.079583 update-ssh-keys[1584]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 00:46:35.080922 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 00:46:35.086522 systemd[1]: Finished sshkeys.service.
Aug 13 00:46:35.108451 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 00:46:35.175041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 00:46:35.182636 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 00:46:35.190455 containerd[1524]: time="2025-08-13T00:46:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 00:46:35.191232 containerd[1524]: time="2025-08-13T00:46:35.190878963Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 00:46:35.205458 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 00:46:35.221583 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Aug 13 00:46:35.224518 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Aug 13 00:46:35.224857 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 00:46:35.237455 kernel: ACPI: button: Power Button [PWRF]
Aug 13 00:46:35.239921 containerd[1524]: time="2025-08-13T00:46:35.238850588Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.573µs"
Aug 13 00:46:35.242480 containerd[1524]: time="2025-08-13T00:46:35.242413734Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242596987Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242770167Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242786338Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242812027Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242864213Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.242875414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.243143870Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.243162030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.243174938Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.243184528Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 00:46:35.243789 containerd[1524]: time="2025-08-13T00:46:35.243330067Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 00:46:35.242964 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 00:46:35.243403 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 00:46:35.247731 containerd[1524]: time="2025-08-13T00:46:35.247160540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 00:46:35.247731 containerd[1524]: time="2025-08-13T00:46:35.247227543Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 00:46:35.247731 containerd[1524]: time="2025-08-13T00:46:35.247242410Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 00:46:35.247731 containerd[1524]: time="2025-08-13T00:46:35.247287052Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 00:46:35.248750 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 00:46:35.249955 containerd[1524]: time="2025-08-13T00:46:35.249080667Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 00:46:35.249955 containerd[1524]: time="2025-08-13T00:46:35.249216338Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255126290Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255197749Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255237348Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255253614Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255267224Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255280721Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255293675Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255305509Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255315954Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255326012Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255334400Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 00:46:35.255549 containerd[1524]: time="2025-08-13T00:46:35.255347230Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.256973000Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257022441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257043997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257083147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257101769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257116027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257127956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257140139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257152637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257163413Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257173202Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257258805Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257282382Z" level=info msg="Start snapshots syncer"
Aug 13 00:46:35.257857 containerd[1524]: time="2025-08-13T00:46:35.257318773Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 00:46:35.262217 containerd[1524]: time="2025-08-13T00:46:35.260265669Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 00:46:35.262217 containerd[1524]: time="2025-08-13T00:46:35.260365980Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260500247Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260652228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260674757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260686063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260698694Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260713670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260724515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260735385Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260777768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260794937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260805768Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260847363Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260865246Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 00:46:35.262524 containerd[1524]: time="2025-08-13T00:46:35.260874296Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260883496Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260891055Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260900200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260910353Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260923687Z" level=info msg="runtime interface created" Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260928613Z" level=info msg="created NRI interface" Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260939817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260954281Z" level=info msg="Connect containerd service" Aug 13 00:46:35.262875 containerd[1524]: time="2025-08-13T00:46:35.260978203Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:46:35.271571 
containerd[1524]: time="2025-08-13T00:46:35.270986286Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:46:35.321329 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:46:35.325925 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:46:35.330938 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:46:35.332453 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:46:35.463077 coreos-metadata[1487]: Aug 13 00:46:35.463 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #2 Aug 13 00:46:35.480132 coreos-metadata[1487]: Aug 13 00:46:35.480 INFO Fetch successful Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.579948457Z" level=info msg="Start subscribing containerd event" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580016352Z" level=info msg="Start recovering state" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580126312Z" level=info msg="Start event monitor" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580142533Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580150856Z" level=info msg="Start streaming server" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580158514Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580166445Z" level=info msg="runtime interface starting up..." Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580172546Z" level=info msg="starting plugins..." 
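The containerd records above use logrus-style `key="value"` formatting (`time`, `level`, `msg`, `id`, `type`). A minimal sketch of a parser for one such record — the field names come from the log itself, but the helper name and regex are my own, not part of containerd:

```python
import re

# Matches one key=value field; handles quoted values with spaces
# (msg="loading plugin") and bare values (level=info, id=io.containerd...).
_FIELD = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_containerd_record(line: str) -> dict:
    """Parse a single logrus-formatted containerd log record into a dict."""
    fields = {}
    for key, quoted, bare in _FIELD.findall(line):
        fields[key] = quoted if quoted else bare
    return fields
```

For example, feeding it the CNI error record above yields `level=error` and the full `msg` text, which is handy when grepping a long boot log for failures.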
Aug 13 00:46:35.580309 containerd[1524]: time="2025-08-13T00:46:35.580185237Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 00:46:35.582056 containerd[1524]: time="2025-08-13T00:46:35.582016322Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:46:35.584937 containerd[1524]: time="2025-08-13T00:46:35.584865039Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:46:35.593658 containerd[1524]: time="2025-08-13T00:46:35.592164845Z" level=info msg="containerd successfully booted in 0.402951s" Aug 13 00:46:35.592272 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:46:35.609372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:46:35.634101 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:46:35.640166 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:46:35.785546 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Aug 13 00:46:35.787459 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Aug 13 00:46:35.792668 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:46:35.792746 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Aug 13 00:46:35.792763 kernel: [drm] features: -context_init Aug 13 00:46:35.796452 kernel: [drm] number of scanouts: 1 Aug 13 00:46:35.796522 kernel: [drm] number of cap sets: 0 Aug 13 00:46:35.799200 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Aug 13 00:46:35.898250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:46:35.931521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 13 00:46:35.931698 systemd-logind[1499]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:46:35.932043 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:46:35.932640 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:46:35.935886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:46:35.939553 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:46:35.960458 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:46:35.972335 systemd-logind[1499]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 00:46:36.043521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:46:36.099042 tar[1510]: linux-amd64/LICENSE Aug 13 00:46:36.099042 tar[1510]: linux-amd64/README.md Aug 13 00:46:36.120930 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:46:36.281867 systemd-networkd[1452]: eth0: Gained IPv6LL Aug 13 00:46:36.282935 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Aug 13 00:46:36.285376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:46:36.286239 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:46:36.288645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:36.290549 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:46:36.322194 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:46:36.666337 systemd-networkd[1452]: eth1: Gained IPv6LL Aug 13 00:46:36.667309 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Aug 13 00:46:37.388475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:46:37.389171 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:46:37.390440 systemd[1]: Startup finished in 3.452s (kernel) + 5.558s (initrd) + 6.242s (userspace) = 15.252s. Aug 13 00:46:37.395041 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:46:38.032059 kubelet[1675]: E0813 00:46:38.031985 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:38.035517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:38.035665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:38.036303 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 265.7M memory peak. Aug 13 00:46:39.032905 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:46:39.034999 systemd[1]: Started sshd@0-134.199.224.26:22-139.178.68.195:48532.service - OpenSSH per-connection server daemon (139.178.68.195:48532). Aug 13 00:46:39.148353 sshd[1688]: Accepted publickey for core from 139.178.68.195 port 48532 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:39.151741 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:39.166532 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:46:39.167951 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:46:39.172739 systemd-logind[1499]: New session 1 of user core. Aug 13 00:46:39.210680 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
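The kubelet failure above (`run.go:72`, exit status 1) is the standard symptom of kubelet starting before `kubeadm init`/`kubeadm join` has generated `/var/lib/kubelet/config.yaml`; systemd then restarts the unit until the file appears, as seen later in this log. A hedged sketch of the precondition involved (the helper name is invented here, not kubelet's):

```python
from pathlib import Path

def kubelet_config_present(path: str = "/var/lib/kubelet/config.yaml") -> bool:
    # kubeadm init/join writes this file; until it exists, kubelet exits
    # with status 1 and systemd schedules the restart seen at 00:46:48.
    return Path(path).is_file()
```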
Aug 13 00:46:39.214351 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:46:39.232599 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:46:39.237254 systemd-logind[1499]: New session c1 of user core. Aug 13 00:46:39.422529 systemd[1692]: Queued start job for default target default.target. Aug 13 00:46:39.430315 systemd[1692]: Created slice app.slice - User Application Slice. Aug 13 00:46:39.430372 systemd[1692]: Reached target paths.target - Paths. Aug 13 00:46:39.430495 systemd[1692]: Reached target timers.target - Timers. Aug 13 00:46:39.433627 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:46:39.452734 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:46:39.452945 systemd[1692]: Reached target sockets.target - Sockets. Aug 13 00:46:39.453040 systemd[1692]: Reached target basic.target - Basic System. Aug 13 00:46:39.453109 systemd[1692]: Reached target default.target - Main User Target. Aug 13 00:46:39.453159 systemd[1692]: Startup finished in 204ms. Aug 13 00:46:39.453242 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:46:39.464829 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:46:39.543761 systemd[1]: Started sshd@1-134.199.224.26:22-139.178.68.195:48538.service - OpenSSH per-connection server daemon (139.178.68.195:48538). Aug 13 00:46:39.635362 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 48538 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:39.637262 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:39.644380 systemd-logind[1499]: New session 2 of user core. Aug 13 00:46:39.653825 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 13 00:46:39.716738 sshd[1705]: Connection closed by 139.178.68.195 port 48538 Aug 13 00:46:39.717599 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:39.737116 systemd[1]: sshd@1-134.199.224.26:22-139.178.68.195:48538.service: Deactivated successfully. Aug 13 00:46:39.740802 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:46:39.742224 systemd-logind[1499]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:46:39.747481 systemd[1]: Started sshd@2-134.199.224.26:22-139.178.68.195:48542.service - OpenSSH per-connection server daemon (139.178.68.195:48542). Aug 13 00:46:39.748371 systemd-logind[1499]: Removed session 2. Aug 13 00:46:39.820305 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 48542 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:39.822485 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:39.828628 systemd-logind[1499]: New session 3 of user core. Aug 13 00:46:39.835796 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:46:39.895972 sshd[1713]: Connection closed by 139.178.68.195 port 48542 Aug 13 00:46:39.895356 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:39.912413 systemd[1]: sshd@2-134.199.224.26:22-139.178.68.195:48542.service: Deactivated successfully. Aug 13 00:46:39.915886 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:46:39.917032 systemd-logind[1499]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:46:39.921161 systemd[1]: Started sshd@3-134.199.224.26:22-139.178.68.195:48558.service - OpenSSH per-connection server daemon (139.178.68.195:48558). Aug 13 00:46:39.922575 systemd-logind[1499]: Removed session 3. 
Aug 13 00:46:39.993240 sshd[1719]: Accepted publickey for core from 139.178.68.195 port 48558 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:39.995690 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:40.002002 systemd-logind[1499]: New session 4 of user core. Aug 13 00:46:40.013939 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:46:40.078082 sshd[1721]: Connection closed by 139.178.68.195 port 48558 Aug 13 00:46:40.078779 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:40.094353 systemd[1]: sshd@3-134.199.224.26:22-139.178.68.195:48558.service: Deactivated successfully. Aug 13 00:46:40.096990 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:46:40.098902 systemd-logind[1499]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:46:40.101984 systemd[1]: Started sshd@4-134.199.224.26:22-139.178.68.195:59900.service - OpenSSH per-connection server daemon (139.178.68.195:59900). Aug 13 00:46:40.103587 systemd-logind[1499]: Removed session 4. Aug 13 00:46:40.162928 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 59900 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:40.164938 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:40.171318 systemd-logind[1499]: New session 5 of user core. Aug 13 00:46:40.184753 systemd[1]: Started session-5.scope - Session 5 of User core. 
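Each `Accepted publickey` line in the sshd entries carries the user, source address, port, and key fingerprint. A small sketch that extracts those fields — the regex and function name are mine, not something sshd provides:

```python
import re

# Field layout taken from the "Accepted publickey" lines in this log.
_ACCEPT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

def parse_sshd_accept(line: str):
    """Return a dict of login fields from an sshd accept line, or None."""
    m = _ACCEPT.search(line)
    return m.groupdict() if m else None
```

Grouping these by fingerprint or source port makes the rapid open/close session churn in this section (sessions 1 through 7) easy to tabulate.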
Aug 13 00:46:40.254722 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:46:40.255845 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:40.269731 sudo[1730]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:40.274452 sshd[1729]: Connection closed by 139.178.68.195 port 59900 Aug 13 00:46:40.273832 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:40.287741 systemd[1]: sshd@4-134.199.224.26:22-139.178.68.195:59900.service: Deactivated successfully. Aug 13 00:46:40.290035 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:46:40.291432 systemd-logind[1499]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:46:40.295948 systemd[1]: Started sshd@5-134.199.224.26:22-139.178.68.195:59912.service - OpenSSH per-connection server daemon (139.178.68.195:59912). Aug 13 00:46:40.297245 systemd-logind[1499]: Removed session 5. Aug 13 00:46:40.359119 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 59912 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:40.361486 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:40.369507 systemd-logind[1499]: New session 6 of user core. Aug 13 00:46:40.377866 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 00:46:40.439746 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:46:40.440106 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:40.446013 sudo[1740]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:40.453309 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:46:40.454076 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:40.466495 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:46:40.521931 augenrules[1762]: No rules Aug 13 00:46:40.523599 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:46:40.523940 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:46:40.526011 sudo[1739]: pam_unix(sudo:session): session closed for user root Aug 13 00:46:40.529072 sshd[1738]: Connection closed by 139.178.68.195 port 59912 Aug 13 00:46:40.529903 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Aug 13 00:46:40.541092 systemd[1]: sshd@5-134.199.224.26:22-139.178.68.195:59912.service: Deactivated successfully. Aug 13 00:46:40.543508 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:46:40.544511 systemd-logind[1499]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:46:40.549009 systemd[1]: Started sshd@6-134.199.224.26:22-139.178.68.195:59922.service - OpenSSH per-connection server daemon (139.178.68.195:59922). Aug 13 00:46:40.550817 systemd-logind[1499]: Removed session 6. 
Aug 13 00:46:40.611572 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 59922 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4 Aug 13 00:46:40.613302 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:46:40.619752 systemd-logind[1499]: New session 7 of user core. Aug 13 00:46:40.633847 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:46:40.692409 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:46:40.693099 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:46:41.191385 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:46:41.213048 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:46:41.549806 dockerd[1792]: time="2025-08-13T00:46:41.549192519Z" level=info msg="Starting up" Aug 13 00:46:41.551273 dockerd[1792]: time="2025-08-13T00:46:41.551223263Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 00:46:41.648125 dockerd[1792]: time="2025-08-13T00:46:41.648056869Z" level=info msg="Loading containers: start." Aug 13 00:46:41.659450 kernel: Initializing XFRM netlink socket Aug 13 00:46:41.913105 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Aug 13 00:46:41.913225 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Aug 13 00:46:41.927345 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Aug 13 00:46:41.973063 systemd-networkd[1452]: docker0: Link UP Aug 13 00:46:41.973794 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. 
Aug 13 00:46:41.976906 dockerd[1792]: time="2025-08-13T00:46:41.976862527Z" level=info msg="Loading containers: done." Aug 13 00:46:41.998451 dockerd[1792]: time="2025-08-13T00:46:41.997332761Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:46:41.998451 dockerd[1792]: time="2025-08-13T00:46:41.997457497Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 00:46:41.998451 dockerd[1792]: time="2025-08-13T00:46:41.997592112Z" level=info msg="Initializing buildkit" Aug 13 00:46:42.000395 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck68410611-merged.mount: Deactivated successfully. Aug 13 00:46:42.031280 dockerd[1792]: time="2025-08-13T00:46:42.031224315Z" level=info msg="Completed buildkit initialization" Aug 13 00:46:42.036706 dockerd[1792]: time="2025-08-13T00:46:42.036642844Z" level=info msg="Daemon has completed initialization" Aug 13 00:46:42.036855 dockerd[1792]: time="2025-08-13T00:46:42.036702665Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:46:42.037229 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:46:42.892898 containerd[1524]: time="2025-08-13T00:46:42.892843225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:46:43.453019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1635018746.mount: Deactivated successfully. 
Aug 13 00:46:44.574129 containerd[1524]: time="2025-08-13T00:46:44.574066444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:44.575058 containerd[1524]: time="2025-08-13T00:46:44.575029126Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 00:46:44.575720 containerd[1524]: time="2025-08-13T00:46:44.575448704Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:44.579780 containerd[1524]: time="2025-08-13T00:46:44.579706754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:44.583449 containerd[1524]: time="2025-08-13T00:46:44.582849905Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.689960987s" Aug 13 00:46:44.583449 containerd[1524]: time="2025-08-13T00:46:44.582908537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 00:46:44.585571 containerd[1524]: time="2025-08-13T00:46:44.585530564Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:46:46.018197 containerd[1524]: time="2025-08-13T00:46:46.018136126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:46.019146 containerd[1524]: time="2025-08-13T00:46:46.019110229Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 00:46:46.020449 containerd[1524]: time="2025-08-13T00:46:46.019656390Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:46.021746 containerd[1524]: time="2025-08-13T00:46:46.021606268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:46.022625 containerd[1524]: time="2025-08-13T00:46:46.022593148Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.437022624s" Aug 13 00:46:46.022705 containerd[1524]: time="2025-08-13T00:46:46.022631073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 00:46:46.023446 containerd[1524]: time="2025-08-13T00:46:46.023108469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:46:47.260458 containerd[1524]: time="2025-08-13T00:46:47.258749055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:47.260458 containerd[1524]: 
time="2025-08-13T00:46:47.259705363Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:47.260458 containerd[1524]: time="2025-08-13T00:46:47.259757510Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 00:46:47.263120 containerd[1524]: time="2025-08-13T00:46:47.263071275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:47.264097 containerd[1524]: time="2025-08-13T00:46:47.264057790Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.240919042s" Aug 13 00:46:47.264253 containerd[1524]: time="2025-08-13T00:46:47.264238826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 00:46:47.264874 containerd[1524]: time="2025-08-13T00:46:47.264836711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:46:48.286395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:46:48.290671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:48.392096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065851393.mount: Deactivated successfully. Aug 13 00:46:48.481710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
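The "Pulled image" records above report a size in bytes and a wall-clock duration (e.g. `size "28074559" in 1.689960987s` for kube-apiserver), so an effective pull rate can be derived. A hedged sketch of that arithmetic:

```python
def pull_throughput_mb_s(size_bytes: int, seconds: float) -> float:
    # Effective rate for one image pull, in MB/s (10**6 bytes per MB).
    return size_bytes / seconds / 1e6

# kube-apiserver above: 28074559 bytes in 1.689960987 s -> roughly 16.6 MB/s
```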
Aug 13 00:46:48.492579 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:46:48.599078 kubelet[2080]: E0813 00:46:48.598800 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:46:48.605606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:46:48.605830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:46:48.606291 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. Aug 13 00:46:49.070203 containerd[1524]: time="2025-08-13T00:46:49.070033322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:49.075307 containerd[1524]: time="2025-08-13T00:46:49.075239397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 00:46:49.076397 containerd[1524]: time="2025-08-13T00:46:49.076306332Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:49.078272 containerd[1524]: time="2025-08-13T00:46:49.078215600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:49.079028 containerd[1524]: time="2025-08-13T00:46:49.078825554Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id 
\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.813948522s" Aug 13 00:46:49.079028 containerd[1524]: time="2025-08-13T00:46:49.078869426Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 00:46:49.079687 containerd[1524]: time="2025-08-13T00:46:49.079665486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:46:49.114439 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Aug 13 00:46:49.588804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119711499.mount: Deactivated successfully. Aug 13 00:46:50.406858 containerd[1524]: time="2025-08-13T00:46:50.406774023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:50.408036 containerd[1524]: time="2025-08-13T00:46:50.407993304Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:46:50.409158 containerd[1524]: time="2025-08-13T00:46:50.408699377Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:50.412580 containerd[1524]: time="2025-08-13T00:46:50.412531428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:50.413503 containerd[1524]: time="2025-08-13T00:46:50.413459433Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.333678315s" Aug 13 00:46:50.413729 containerd[1524]: time="2025-08-13T00:46:50.413630272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:46:50.414355 containerd[1524]: time="2025-08-13T00:46:50.414308991Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:46:50.930142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315571120.mount: Deactivated successfully. Aug 13 00:46:50.935339 containerd[1524]: time="2025-08-13T00:46:50.934693932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:50.936135 containerd[1524]: time="2025-08-13T00:46:50.936104745Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:46:50.937078 containerd[1524]: time="2025-08-13T00:46:50.937048775Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:50.939829 containerd[1524]: time="2025-08-13T00:46:50.939780373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:46:50.940847 containerd[1524]: 
time="2025-08-13T00:46:50.940794707Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.315285ms" Aug 13 00:46:50.940847 containerd[1524]: time="2025-08-13T00:46:50.940845237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:46:50.941771 containerd[1524]: time="2025-08-13T00:46:50.941461495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:46:51.441187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406461209.mount: Deactivated successfully. Aug 13 00:46:52.217669 systemd-resolved[1403]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
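Each "Pulled image" entry reports a byte size and a Go-style duration ("526.315285ms", "1.240919042s"), which is enough to recover average pull throughput. A sketch using the pause:3.10 numbers above (320368 bytes in 526.315285ms, roughly 0.6 MB/s); the helper names are ours, not containerd's:

```python
def parse_go_duration(text: str) -> float:
    """Convert a Go duration with an 'ms' or 's' suffix to seconds."""
    if text.endswith("ms"):          # check 'ms' before the broader 's'
        return float(text[:-2]) / 1000.0
    if text.endswith("s"):
        return float(text[:-1])
    raise ValueError(f"unsupported duration: {text!r}")

def pull_rate_mbps(size_bytes: int, duration: str) -> float:
    """Average pull throughput in decimal megabytes per second."""
    return size_bytes / 1e6 / parse_go_duration(duration)

# pause:3.10 from the log above: 320368 bytes in 526.315285ms
rate = pull_rate_mbps(320368, "526.315285ms")
```

The same arithmetic on the etcd pull below (56909194 bytes in about 2.27s) gives on the order of 25 MB/s, consistent with the droplet's network rather than local cache.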
Aug 13 00:46:53.206493 containerd[1524]: time="2025-08-13T00:46:53.206414672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:53.207866 containerd[1524]: time="2025-08-13T00:46:53.207806361Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 00:46:53.209315 containerd[1524]: time="2025-08-13T00:46:53.209254353Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:53.212753 containerd[1524]: time="2025-08-13T00:46:53.212672945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:46:53.215454 containerd[1524]: time="2025-08-13T00:46:53.215352689Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.27386027s" Aug 13 00:46:53.215454 containerd[1524]: time="2025-08-13T00:46:53.215401298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 00:46:56.432367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:56.432607 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.7M memory peak. Aug 13 00:46:56.435562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:56.473226 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)... 
Aug 13 00:46:56.473248 systemd[1]: Reloading... Aug 13 00:46:56.655468 zram_generator::config[2276]: No configuration found. Aug 13 00:46:56.761915 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:46:56.909895 systemd[1]: Reloading finished in 436 ms. Aug 13 00:46:56.995289 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:46:56.995393 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:46:56.995740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:56.995800 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.4M memory peak. Aug 13 00:46:56.998085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:46:57.179885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:46:57.192137 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:46:57.254427 kubelet[2321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:46:57.254427 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:46:57.254427 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
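The deprecation warnings above all point at the file given by the kubelet's `--config` flag. A hedged sketch of how the two migratable flags could move into that file; the field names follow the `kubelet.config.k8s.io/v1beta1` schema, and the endpoint and path values here are illustrative, not read from this host (`--pod-infra-container-image` has no config-file equivalent, per the warning itself):

```yaml
# /var/lib/kubelet/config.yaml (illustrative sketch, not this host's file)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
```

Note this is also the file whose absence caused the earlier crash loop ("open /var/lib/kubelet/config.yaml: no such file or directory"); kubeadm normally writes it during init/join.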
Aug 13 00:46:57.254927 kubelet[2321]: I0813 00:46:57.254492 2321 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:46:57.525234 kubelet[2321]: I0813 00:46:57.525082 2321 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:46:57.525234 kubelet[2321]: I0813 00:46:57.525122 2321 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:46:57.526035 kubelet[2321]: I0813 00:46:57.525977 2321 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:46:57.557463 kubelet[2321]: E0813 00:46:57.557086 2321 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.224.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:57.558162 kubelet[2321]: I0813 00:46:57.558117 2321 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:46:57.570387 kubelet[2321]: I0813 00:46:57.570360 2321 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:46:57.575777 kubelet[2321]: I0813 00:46:57.575729 2321 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:46:57.576740 kubelet[2321]: I0813 00:46:57.576688 2321 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:46:57.577049 kubelet[2321]: I0813 00:46:57.576997 2321 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:46:57.577311 kubelet[2321]: I0813 00:46:57.577042 2321 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-508df13d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:46:57.577470 kubelet[2321]: I0813 00:46:57.577325 2321 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:46:57.577470 kubelet[2321]: I0813 00:46:57.577336 2321 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:46:57.577571 kubelet[2321]: I0813 00:46:57.577517 2321 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:57.580472 kubelet[2321]: I0813 00:46:57.579956 2321 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:46:57.580472 kubelet[2321]: I0813 00:46:57.579994 2321 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:46:57.580472 kubelet[2321]: I0813 00:46:57.580042 2321 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:46:57.580472 kubelet[2321]: I0813 00:46:57.580063 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:46:57.582064 kubelet[2321]: W0813 00:46:57.581987 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.224.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-508df13d84&limit=500&resourceVersion=0": dial tcp 134.199.224.26:6443: connect: connection refused Aug 13 00:46:57.582294 kubelet[2321]: E0813 00:46:57.582264 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.224.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.1.0-a-508df13d84&limit=500&resourceVersion=0\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:57.584114 kubelet[2321]: W0813 00:46:57.584043 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.224.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
134.199.224.26:6443: connect: connection refused Aug 13 00:46:57.584329 kubelet[2321]: E0813 00:46:57.584296 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.224.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:57.584555 kubelet[2321]: I0813 00:46:57.584538 2321 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:46:57.588283 kubelet[2321]: I0813 00:46:57.588240 2321 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:46:57.588550 kubelet[2321]: W0813 00:46:57.588537 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:46:57.589338 kubelet[2321]: I0813 00:46:57.589311 2321 server.go:1274] "Started kubelet" Aug 13 00:46:57.589558 kubelet[2321]: I0813 00:46:57.589528 2321 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:46:57.590773 kubelet[2321]: I0813 00:46:57.590570 2321 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:46:57.596325 kubelet[2321]: I0813 00:46:57.596112 2321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:46:57.596633 kubelet[2321]: I0813 00:46:57.596618 2321 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:46:57.606572 kubelet[2321]: I0813 00:46:57.603786 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:46:57.612455 kubelet[2321]: E0813 00:46:57.597574 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://134.199.224.26:6443/api/v1/namespaces/default/events\": dial tcp 134.199.224.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.1.0-a-508df13d84.185b2d0deadf7ae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-a-508df13d84,UID:ci-4372.1.0-a-508df13d84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-a-508df13d84,},FirstTimestamp:2025-08-13 00:46:57.589279464 +0000 UTC m=+0.390380375,LastTimestamp:2025-08-13 00:46:57.589279464 +0000 UTC m=+0.390380375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-a-508df13d84,}" Aug 13 00:46:57.617538 kubelet[2321]: I0813 00:46:57.617496 2321 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:46:57.617837 kubelet[2321]: E0813 00:46:57.617813 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-508df13d84\" not found" Aug 13 00:46:57.618020 kubelet[2321]: I0813 00:46:57.617994 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:46:57.620221 kubelet[2321]: I0813 00:46:57.620189 2321 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:46:57.620349 kubelet[2321]: I0813 00:46:57.620275 2321 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:46:57.621168 kubelet[2321]: E0813 00:46:57.621132 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.224.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-508df13d84?timeout=10s\": dial tcp 134.199.224.26:6443: connect: connection refused" interval="200ms" Aug 13 00:46:57.622161 kubelet[2321]: 
W0813 00:46:57.622120 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.224.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.224.26:6443: connect: connection refused Aug 13 00:46:57.622314 kubelet[2321]: E0813 00:46:57.622290 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.224.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:57.622968 kubelet[2321]: I0813 00:46:57.622939 2321 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:46:57.623207 kubelet[2321]: I0813 00:46:57.623181 2321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:46:57.630905 kubelet[2321]: I0813 00:46:57.630750 2321 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:46:57.642112 kubelet[2321]: I0813 00:46:57.639678 2321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:46:57.646409 kubelet[2321]: I0813 00:46:57.646344 2321 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:46:57.646409 kubelet[2321]: I0813 00:46:57.646393 2321 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:46:57.646620 kubelet[2321]: I0813 00:46:57.646448 2321 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:46:57.646620 kubelet[2321]: E0813 00:46:57.646518 2321 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:46:57.657505 kubelet[2321]: W0813 00:46:57.657394 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.224.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.224.26:6443: connect: connection refused Aug 13 00:46:57.657777 kubelet[2321]: E0813 00:46:57.657525 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.224.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:57.663656 kubelet[2321]: I0813 00:46:57.663630 2321 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:46:57.664130 kubelet[2321]: I0813 00:46:57.663844 2321 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:46:57.664130 kubelet[2321]: I0813 00:46:57.663871 2321 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:46:57.665435 kubelet[2321]: I0813 00:46:57.665401 2321 policy_none.go:49] "None policy: Start" Aug 13 00:46:57.666344 kubelet[2321]: I0813 00:46:57.666314 2321 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:46:57.666344 kubelet[2321]: I0813 00:46:57.666349 2321 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:46:57.672946 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Aug 13 00:46:57.687581 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:46:57.692204 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:46:57.710572 kubelet[2321]: I0813 00:46:57.710540 2321 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:46:57.710867 kubelet[2321]: I0813 00:46:57.710751 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:46:57.710867 kubelet[2321]: I0813 00:46:57.710763 2321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:46:57.711968 kubelet[2321]: I0813 00:46:57.711330 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:46:57.713814 kubelet[2321]: E0813 00:46:57.713788 2321 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.1.0-a-508df13d84\" not found" Aug 13 00:46:57.760122 systemd[1]: Created slice kubepods-burstable-podae06567375181bf17fc2ed379a8ab384.slice - libcontainer container kubepods-burstable-podae06567375181bf17fc2ed379a8ab384.slice. Aug 13 00:46:57.784376 systemd[1]: Created slice kubepods-burstable-pod68d195199b1aa417f3d6ba6692ccc9b4.slice - libcontainer container kubepods-burstable-pod68d195199b1aa417f3d6ba6692ccc9b4.slice. Aug 13 00:46:57.792127 systemd[1]: Created slice kubepods-burstable-pod58f9eee0198aabca6aa5af80dbb0b7bb.slice - libcontainer container kubepods-burstable-pod58f9eee0198aabca6aa5af80dbb0b7bb.slice. 
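The container_manager_linux.go line earlier embeds the node config as one JSON blob, including the HardEvictionThresholds the eviction manager enforces. A sketch decoding that fragment (trimmed here to two of the five thresholds; the structure mirrors the log, the helper and variable names are ours):

```python
import json

# Trimmed from the NodeConfig JSON logged by container_manager_linux.go above.
node_config = json.loads("""
{"HardEvictionThresholds": [
  {"Signal": "nodefs.available", "Operator": "LessThan",
   "Value": {"Quantity": null, "Percentage": 0.1}},
  {"Signal": "memory.available", "Operator": "LessThan",
   "Value": {"Quantity": "100Mi", "Percentage": 0}}
]}
""")

def describe(threshold: dict) -> str:
    """Render one eviction threshold as 'signal < limit'."""
    value = threshold["Value"]
    # A threshold carries either an absolute Quantity or a Percentage.
    limit = value["Quantity"] or f"{value['Percentage']:.0%}"
    return f"{threshold['Signal']} < {limit}"

rules = [describe(t) for t in node_config["HardEvictionThresholds"]]
```

Under these defaults the kubelet starts hard-evicting pods when node filesystem space drops below 10% or allocatable memory below 100Mi.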
Aug 13 00:46:57.812590 kubelet[2321]: I0813 00:46:57.812498 2321 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.813033 kubelet[2321]: E0813 00:46:57.812997 2321 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.224.26:6443/api/v1/nodes\": dial tcp 134.199.224.26:6443: connect: connection refused" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822205 kubelet[2321]: I0813 00:46:57.821889 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58f9eee0198aabca6aa5af80dbb0b7bb-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-508df13d84\" (UID: \"58f9eee0198aabca6aa5af80dbb0b7bb\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822205 kubelet[2321]: I0813 00:46:57.821936 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822205 kubelet[2321]: I0813 00:46:57.821956 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822205 kubelet[2321]: I0813 00:46:57.821977 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-ca-certs\") pod 
\"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822205 kubelet[2321]: I0813 00:46:57.822007 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822494 kubelet[2321]: I0813 00:46:57.822038 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-k8s-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822494 kubelet[2321]: I0813 00:46:57.822058 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822494 kubelet[2321]: I0813 00:46:57.822086 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:46:57.822494 kubelet[2321]: E0813 00:46:57.822105 2321 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://134.199.224.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-508df13d84?timeout=10s\": dial tcp 134.199.224.26:6443: connect: connection refused" interval="400ms" Aug 13 00:46:57.822494 kubelet[2321]: I0813 00:46:57.822122 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:46:58.015388 kubelet[2321]: I0813 00:46:58.015061 2321 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:58.015615 kubelet[2321]: E0813 00:46:58.015488 2321 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.224.26:6443/api/v1/nodes\": dial tcp 134.199.224.26:6443: connect: connection refused" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:58.080888 kubelet[2321]: E0813 00:46:58.080753 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.082548 containerd[1524]: time="2025-08-13T00:46:58.082485148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-508df13d84,Uid:ae06567375181bf17fc2ed379a8ab384,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:58.090211 kubelet[2321]: E0813 00:46:58.090038 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.096148 kubelet[2321]: E0813 00:46:58.095719 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.097738 containerd[1524]: time="2025-08-13T00:46:58.097684888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-508df13d84,Uid:68d195199b1aa417f3d6ba6692ccc9b4,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:58.098354 containerd[1524]: time="2025-08-13T00:46:58.098049703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-508df13d84,Uid:58f9eee0198aabca6aa5af80dbb0b7bb,Namespace:kube-system,Attempt:0,}" Aug 13 00:46:58.212514 containerd[1524]: time="2025-08-13T00:46:58.212437221Z" level=info msg="connecting to shim ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6" address="unix:///run/containerd/s/e65e14265e6d6c0bce6cce58eadbdae0cc95ec3ee6c7e400428eb5552a093bbe" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:58.213277 containerd[1524]: time="2025-08-13T00:46:58.213203668Z" level=info msg="connecting to shim ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91" address="unix:///run/containerd/s/450820a7c2eef466174b718725ac0433b3b4a4f79838c4884a6f6b13ba03f7d2" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:58.217334 containerd[1524]: time="2025-08-13T00:46:58.217289825Z" level=info msg="connecting to shim 2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb" address="unix:///run/containerd/s/b30760a45105620bb3896e00eb265a8aecdc452fe248280f755559a95a73c52a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:46:58.223485 kubelet[2321]: E0813 00:46:58.223371 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.224.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.1.0-a-508df13d84?timeout=10s\": dial tcp 134.199.224.26:6443: connect: connection refused" interval="800ms" Aug 13 00:46:58.322760 systemd[1]: Started 
cri-containerd-2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb.scope - libcontainer container 2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb. Aug 13 00:46:58.324961 systemd[1]: Started cri-containerd-ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91.scope - libcontainer container ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91. Aug 13 00:46:58.327961 systemd[1]: Started cri-containerd-ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6.scope - libcontainer container ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6. Aug 13 00:46:58.417860 kubelet[2321]: I0813 00:46:58.417524 2321 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:58.419640 kubelet[2321]: E0813 00:46:58.419559 2321 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://134.199.224.26:6443/api/v1/nodes\": dial tcp 134.199.224.26:6443: connect: connection refused" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:58.452071 containerd[1524]: time="2025-08-13T00:46:58.451964008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.1.0-a-508df13d84,Uid:68d195199b1aa417f3d6ba6692ccc9b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91\"" Aug 13 00:46:58.453580 kubelet[2321]: E0813 00:46:58.453505 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.456685 containerd[1524]: time="2025-08-13T00:46:58.456605987Z" level=info msg="CreateContainer within sandbox \"ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:46:58.470406 containerd[1524]: time="2025-08-13T00:46:58.470345433Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.1.0-a-508df13d84,Uid:ae06567375181bf17fc2ed379a8ab384,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6\"" Aug 13 00:46:58.472663 containerd[1524]: time="2025-08-13T00:46:58.472334459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.1.0-a-508df13d84,Uid:58f9eee0198aabca6aa5af80dbb0b7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb\"" Aug 13 00:46:58.472798 kubelet[2321]: E0813 00:46:58.472565 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.473677 kubelet[2321]: E0813 00:46:58.473636 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:58.476050 containerd[1524]: time="2025-08-13T00:46:58.475952110Z" level=info msg="CreateContainer within sandbox \"ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:46:58.476303 containerd[1524]: time="2025-08-13T00:46:58.476209384Z" level=info msg="CreateContainer within sandbox \"2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:46:58.477550 containerd[1524]: time="2025-08-13T00:46:58.477470523Z" level=info msg="Container e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:58.489947 containerd[1524]: time="2025-08-13T00:46:58.489874660Z" level=info msg="Container 
b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:58.493131 containerd[1524]: time="2025-08-13T00:46:58.493078063Z" level=info msg="Container 5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:46:58.500444 containerd[1524]: time="2025-08-13T00:46:58.500373870Z" level=info msg="CreateContainer within sandbox \"ac7f142ea01e2b1f9dc828ea05e01d25db992e7f267495c3aa6363b76f58ef91\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167\"" Aug 13 00:46:58.502724 kubelet[2321]: W0813 00:46:58.502585 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.224.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.224.26:6443: connect: connection refused Aug 13 00:46:58.502869 kubelet[2321]: E0813 00:46:58.502714 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.224.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.224.26:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:46:58.506447 containerd[1524]: time="2025-08-13T00:46:58.505785094Z" level=info msg="StartContainer for \"e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167\"" Aug 13 00:46:58.509182 containerd[1524]: time="2025-08-13T00:46:58.509130972Z" level=info msg="connecting to shim e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167" address="unix:///run/containerd/s/450820a7c2eef466174b718725ac0433b3b4a4f79838c4884a6f6b13ba03f7d2" protocol=ttrpc version=3 Aug 13 00:46:58.512678 containerd[1524]: time="2025-08-13T00:46:58.512616356Z" level=info 
msg="CreateContainer within sandbox \"2efdae8d4b3e0d31e2544f81e35dba0c455e3e5115bc7d38cdd7a5b1a1de11fb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b\"" Aug 13 00:46:58.515504 containerd[1524]: time="2025-08-13T00:46:58.515455248Z" level=info msg="StartContainer for \"b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b\"" Aug 13 00:46:58.517044 containerd[1524]: time="2025-08-13T00:46:58.516987758Z" level=info msg="connecting to shim b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b" address="unix:///run/containerd/s/b30760a45105620bb3896e00eb265a8aecdc452fe248280f755559a95a73c52a" protocol=ttrpc version=3 Aug 13 00:46:58.518306 containerd[1524]: time="2025-08-13T00:46:58.518250245Z" level=info msg="CreateContainer within sandbox \"ffe4606b2aa780ac58237d1e601d3b8c16a49beae30ef2ccc57a08d12d7bd8f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d\"" Aug 13 00:46:58.519345 containerd[1524]: time="2025-08-13T00:46:58.519312786Z" level=info msg="StartContainer for \"5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d\"" Aug 13 00:46:58.523851 containerd[1524]: time="2025-08-13T00:46:58.523801377Z" level=info msg="connecting to shim 5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d" address="unix:///run/containerd/s/e65e14265e6d6c0bce6cce58eadbdae0cc95ec3ee6c7e400428eb5552a093bbe" protocol=ttrpc version=3 Aug 13 00:46:58.550763 systemd[1]: Started cri-containerd-e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167.scope - libcontainer container e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167. 
Aug 13 00:46:58.574734 systemd[1]: Started cri-containerd-5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d.scope - libcontainer container 5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d. Aug 13 00:46:58.578599 systemd[1]: Started cri-containerd-b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b.scope - libcontainer container b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b. Aug 13 00:46:58.718961 containerd[1524]: time="2025-08-13T00:46:58.718857826Z" level=info msg="StartContainer for \"e3a4f554acc32f7c1122fa2d3f3a6a3a1d9627711665f67c54b7439ec3d58167\" returns successfully" Aug 13 00:46:58.721470 containerd[1524]: time="2025-08-13T00:46:58.719982617Z" level=info msg="StartContainer for \"5e8fa7ea0c43b69a0df19a59941db211d453d5e1e1c5561f0fd6c0d5c70a5a4d\" returns successfully" Aug 13 00:46:58.733078 containerd[1524]: time="2025-08-13T00:46:58.733022017Z" level=info msg="StartContainer for \"b4e9ccee78a131056a819e645d7bb949adbc105c18091d715bbd8d036099089b\" returns successfully" Aug 13 00:46:59.222106 kubelet[2321]: I0813 00:46:59.222069 2321 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:46:59.717846 kubelet[2321]: E0813 00:46:59.717788 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:59.722041 kubelet[2321]: E0813 00:46:59.721992 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:46:59.729165 kubelet[2321]: E0813 00:46:59.729114 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:00.730543 kubelet[2321]: E0813 
00:47:00.730503 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:00.731122 kubelet[2321]: E0813 00:47:00.730970 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:00.731846 kubelet[2321]: E0813 00:47:00.731810 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:01.385449 kubelet[2321]: E0813 00:47:01.383878 2321 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.1.0-a-508df13d84\" not found" node="ci-4372.1.0-a-508df13d84" Aug 13 00:47:01.500479 kubelet[2321]: E0813 00:47:01.500146 2321 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4372.1.0-a-508df13d84.185b2d0deadf7ae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.1.0-a-508df13d84,UID:ci-4372.1.0-a-508df13d84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.1.0-a-508df13d84,},FirstTimestamp:2025-08-13 00:46:57.589279464 +0000 UTC m=+0.390380375,LastTimestamp:2025-08-13 00:46:57.589279464 +0000 UTC m=+0.390380375,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.1.0-a-508df13d84,}" Aug 13 00:47:01.555150 kubelet[2321]: I0813 00:47:01.554733 2321 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:47:01.587459 kubelet[2321]: I0813 
00:47:01.586638 2321 apiserver.go:52] "Watching apiserver" Aug 13 00:47:01.620749 kubelet[2321]: I0813 00:47:01.620693 2321 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:01.740582 kubelet[2321]: E0813 00:47:01.740522 2321 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:01.741035 kubelet[2321]: E0813 00:47:01.740735 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:01.741536 kubelet[2321]: E0813 00:47:01.741503 2321 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.1.0-a-508df13d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:47:01.741742 kubelet[2321]: E0813 00:47:01.741696 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:03.656280 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-7.scope)... Aug 13 00:47:03.656305 systemd[1]: Reloading... Aug 13 00:47:03.778473 zram_generator::config[2638]: No configuration found. Aug 13 00:47:03.898663 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:47:04.016516 kubelet[2321]: W0813 00:47:04.015393 2321 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:47:04.016516 kubelet[2321]: E0813 00:47:04.015654 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:04.095402 systemd[1]: Reloading finished in 438 ms. Aug 13 00:47:04.130557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:04.149113 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:47:04.149395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:04.149485 systemd[1]: kubelet.service: Consumed 907ms CPU time, 123.1M memory peak. Aug 13 00:47:04.152340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:47:04.348612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:47:04.362448 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:47:04.445447 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:04.445447 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:47:04.445447 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:47:04.446052 kubelet[2686]: I0813 00:47:04.445502 2686 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:47:04.455457 kubelet[2686]: I0813 00:47:04.454776 2686 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:47:04.455457 kubelet[2686]: I0813 00:47:04.454821 2686 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:47:04.455457 kubelet[2686]: I0813 00:47:04.455234 2686 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:47:04.457787 kubelet[2686]: I0813 00:47:04.457700 2686 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:47:04.462047 kubelet[2686]: I0813 00:47:04.461999 2686 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:47:04.468035 kubelet[2686]: I0813 00:47:04.468004 2686 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 00:47:04.478310 kubelet[2686]: I0813 00:47:04.478267 2686 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:47:04.478513 kubelet[2686]: I0813 00:47:04.478473 2686 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:47:04.479029 kubelet[2686]: I0813 00:47:04.478639 2686 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:47:04.480312 kubelet[2686]: I0813 00:47:04.478706 2686 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.1.0-a-508df13d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:47:04.481017 kubelet[2686]: I0813 00:47:04.480965 2686 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:47:04.481017 kubelet[2686]: I0813 00:47:04.480992 2686 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:47:04.481143 kubelet[2686]: I0813 00:47:04.481130 2686 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:04.481443 kubelet[2686]: I0813 00:47:04.481283 2686 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:47:04.481443 kubelet[2686]: I0813 00:47:04.481301 2686 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:47:04.481978 kubelet[2686]: I0813 00:47:04.481891 2686 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:47:04.481978 kubelet[2686]: I0813 00:47:04.481941 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:47:04.485125 kubelet[2686]: I0813 00:47:04.485082 2686 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 00:47:04.485690 kubelet[2686]: I0813 00:47:04.485647 2686 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:47:04.486210 kubelet[2686]: I0813 00:47:04.486155 2686 server.go:1274] "Started kubelet" Aug 13 00:47:04.498865 kubelet[2686]: I0813 00:47:04.498818 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:47:04.505926 kubelet[2686]: I0813 00:47:04.505867 2686 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:47:04.509023 kubelet[2686]: I0813 00:47:04.508322 2686 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:47:04.512745 kubelet[2686]: I0813 00:47:04.511222 2686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:47:04.512745 kubelet[2686]: I0813 00:47:04.511454 2686 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:47:04.512745 kubelet[2686]: I0813 00:47:04.511703 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:47:04.515027 kubelet[2686]: I0813 00:47:04.514544 2686 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:47:04.515027 kubelet[2686]: E0813 00:47:04.514799 2686 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.1.0-a-508df13d84\" not found" Aug 13 00:47:04.515278 kubelet[2686]: I0813 00:47:04.515137 2686 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:47:04.515278 kubelet[2686]: I0813 00:47:04.515267 2686 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:47:04.525304 kubelet[2686]: I0813 00:47:04.525245 2686 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:47:04.525468 kubelet[2686]: I0813 00:47:04.525368 2686 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:47:04.533454 kubelet[2686]: I0813 00:47:04.533238 2686 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:47:04.537693 kubelet[2686]: I0813 00:47:04.537639 2686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:47:04.539277 kubelet[2686]: I0813 00:47:04.539232 2686 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:47:04.539277 kubelet[2686]: I0813 00:47:04.539263 2686 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:47:04.539277 kubelet[2686]: I0813 00:47:04.539284 2686 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:47:04.541718 kubelet[2686]: E0813 00:47:04.539336 2686 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:47:04.552461 kubelet[2686]: E0813 00:47:04.551088 2686 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:47:04.639774 kubelet[2686]: E0813 00:47:04.639639 2686 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:47:04.674866 kubelet[2686]: I0813 00:47:04.674809 2686 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:47:04.675124 kubelet[2686]: I0813 00:47:04.675101 2686 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:47:04.675230 kubelet[2686]: I0813 00:47:04.675217 2686 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:47:04.675760 kubelet[2686]: I0813 00:47:04.675720 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:47:04.675911 kubelet[2686]: I0813 00:47:04.675865 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:47:04.675998 kubelet[2686]: I0813 00:47:04.675986 2686 policy_none.go:49] "None policy: Start" Aug 13 00:47:04.679283 kubelet[2686]: I0813 00:47:04.679151 2686 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:47:04.679283 kubelet[2686]: I0813 00:47:04.679188 2686 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:47:04.679913 kubelet[2686]: I0813 00:47:04.679887 2686 state_mem.go:75] "Updated machine memory state" Aug 13 00:47:04.697122 
kubelet[2686]: I0813 00:47:04.697039 2686 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:47:04.703582 kubelet[2686]: I0813 00:47:04.703523 2686 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:47:04.703582 kubelet[2686]: I0813 00:47:04.703547 2686 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:47:04.704541 kubelet[2686]: I0813 00:47:04.704082 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:47:04.818689 kubelet[2686]: I0813 00:47:04.818453 2686 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.828408 kubelet[2686]: I0813 00:47:04.828367 2686 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.828712 kubelet[2686]: I0813 00:47:04.828694 2686 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.853497 kubelet[2686]: W0813 00:47:04.852472 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:47:04.853497 kubelet[2686]: W0813 00:47:04.852548 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:47:04.853864 kubelet[2686]: W0813 00:47:04.853790 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:47:04.854131 kubelet[2686]: E0813 00:47:04.853946 2686 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.1.0-a-508df13d84\" already exists" pod="kube-system/kube-scheduler-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918559 
kubelet[2686]: I0813 00:47:04.917706 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-k8s-certs\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918559 kubelet[2686]: I0813 00:47:04.917765 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918559 kubelet[2686]: I0813 00:47:04.917805 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-ca-certs\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918559 kubelet[2686]: I0813 00:47:04.918233 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918559 kubelet[2686]: I0813 00:47:04.918286 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-k8s-certs\") pod 
\"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918999 kubelet[2686]: I0813 00:47:04.918302 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-kubeconfig\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918999 kubelet[2686]: I0813 00:47:04.918320 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68d195199b1aa417f3d6ba6692ccc9b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.1.0-a-508df13d84\" (UID: \"68d195199b1aa417f3d6ba6692ccc9b4\") " pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918999 kubelet[2686]: I0813 00:47:04.918360 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58f9eee0198aabca6aa5af80dbb0b7bb-kubeconfig\") pod \"kube-scheduler-ci-4372.1.0-a-508df13d84\" (UID: \"58f9eee0198aabca6aa5af80dbb0b7bb\") " pod="kube-system/kube-scheduler-ci-4372.1.0-a-508df13d84" Aug 13 00:47:04.918999 kubelet[2686]: I0813 00:47:04.918376 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae06567375181bf17fc2ed379a8ab384-ca-certs\") pod \"kube-apiserver-ci-4372.1.0-a-508df13d84\" (UID: \"ae06567375181bf17fc2ed379a8ab384\") " pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" Aug 13 00:47:05.153894 kubelet[2686]: E0813 00:47:05.153824 2686 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.154344 kubelet[2686]: E0813 00:47:05.154298 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.156267 kubelet[2686]: E0813 00:47:05.154850 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.486108 kubelet[2686]: I0813 00:47:05.486002 2686 apiserver.go:52] "Watching apiserver" Aug 13 00:47:05.515924 kubelet[2686]: I0813 00:47:05.515879 2686 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:47:05.615480 kubelet[2686]: E0813 00:47:05.614197 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.617176 kubelet[2686]: E0813 00:47:05.617123 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.619015 kubelet[2686]: E0813 00:47:05.618968 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:05.658769 kubelet[2686]: I0813 00:47:05.658692 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.1.0-a-508df13d84" podStartSLOduration=1.6586741470000002 podStartE2EDuration="1.658674147s" podCreationTimestamp="2025-08-13 00:47:04 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:05.658390806 +0000 UTC m=+1.288609255" watchObservedRunningTime="2025-08-13 00:47:05.658674147 +0000 UTC m=+1.288892596" Aug 13 00:47:05.694147 kubelet[2686]: I0813 00:47:05.694070 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.1.0-a-508df13d84" podStartSLOduration=1.694040671 podStartE2EDuration="1.694040671s" podCreationTimestamp="2025-08-13 00:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:05.681257108 +0000 UTC m=+1.311475555" watchObservedRunningTime="2025-08-13 00:47:05.694040671 +0000 UTC m=+1.324259130" Aug 13 00:47:05.709403 kubelet[2686]: I0813 00:47:05.709307 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.1.0-a-508df13d84" podStartSLOduration=1.7092894429999999 podStartE2EDuration="1.709289443s" podCreationTimestamp="2025-08-13 00:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:05.696372985 +0000 UTC m=+1.326591439" watchObservedRunningTime="2025-08-13 00:47:05.709289443 +0000 UTC m=+1.339507896" Aug 13 00:47:06.616701 kubelet[2686]: E0813 00:47:06.616656 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:09.176703 kubelet[2686]: I0813 00:47:09.176634 2686 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:47:09.177872 containerd[1524]: time="2025-08-13T00:47:09.177783524Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:47:09.179058 kubelet[2686]: I0813 00:47:09.178830 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:47:10.104179 systemd[1]: Created slice kubepods-besteffort-podc8968ab4_ac94_47fb_b667_77f1dd6b7760.slice - libcontainer container kubepods-besteffort-podc8968ab4_ac94_47fb_b667_77f1dd6b7760.slice. Aug 13 00:47:10.158756 kubelet[2686]: I0813 00:47:10.158599 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8968ab4-ac94-47fb-b667-77f1dd6b7760-xtables-lock\") pod \"kube-proxy-llfdx\" (UID: \"c8968ab4-ac94-47fb-b667-77f1dd6b7760\") " pod="kube-system/kube-proxy-llfdx" Aug 13 00:47:10.158756 kubelet[2686]: I0813 00:47:10.158658 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8968ab4-ac94-47fb-b667-77f1dd6b7760-lib-modules\") pod \"kube-proxy-llfdx\" (UID: \"c8968ab4-ac94-47fb-b667-77f1dd6b7760\") " pod="kube-system/kube-proxy-llfdx" Aug 13 00:47:10.158756 kubelet[2686]: I0813 00:47:10.158676 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2cv\" (UniqueName: \"kubernetes.io/projected/c8968ab4-ac94-47fb-b667-77f1dd6b7760-kube-api-access-vt2cv\") pod \"kube-proxy-llfdx\" (UID: \"c8968ab4-ac94-47fb-b667-77f1dd6b7760\") " pod="kube-system/kube-proxy-llfdx" Aug 13 00:47:10.158756 kubelet[2686]: I0813 00:47:10.158695 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8968ab4-ac94-47fb-b667-77f1dd6b7760-kube-proxy\") pod \"kube-proxy-llfdx\" (UID: \"c8968ab4-ac94-47fb-b667-77f1dd6b7760\") " pod="kube-system/kube-proxy-llfdx" Aug 13 00:47:10.301999 systemd[1]: Created slice 
kubepods-besteffort-pod9c34371c_954e_404c_86cf_076cee928dff.slice - libcontainer container kubepods-besteffort-pod9c34371c_954e_404c_86cf_076cee928dff.slice. Aug 13 00:47:10.360537 kubelet[2686]: I0813 00:47:10.360320 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm6s\" (UniqueName: \"kubernetes.io/projected/9c34371c-954e-404c-86cf-076cee928dff-kube-api-access-txm6s\") pod \"tigera-operator-5bf8dfcb4-kmshq\" (UID: \"9c34371c-954e-404c-86cf-076cee928dff\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kmshq" Aug 13 00:47:10.361664 kubelet[2686]: I0813 00:47:10.361526 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c34371c-954e-404c-86cf-076cee928dff-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-kmshq\" (UID: \"9c34371c-954e-404c-86cf-076cee928dff\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-kmshq" Aug 13 00:47:10.418230 kubelet[2686]: E0813 00:47:10.417738 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:10.419516 containerd[1524]: time="2025-08-13T00:47:10.419464872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llfdx,Uid:c8968ab4-ac94-47fb-b667-77f1dd6b7760,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:10.442760 containerd[1524]: time="2025-08-13T00:47:10.442625487Z" level=info msg="connecting to shim 6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9" address="unix:///run/containerd/s/29fc85be89c75d1ac2d9a74aa416ef55cf9692379a9a3b13f7f94c041e112961" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:10.478660 systemd[1]: Started cri-containerd-6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9.scope - libcontainer container 
6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9. Aug 13 00:47:10.517365 containerd[1524]: time="2025-08-13T00:47:10.517306029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-llfdx,Uid:c8968ab4-ac94-47fb-b667-77f1dd6b7760,Namespace:kube-system,Attempt:0,} returns sandbox id \"6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9\"" Aug 13 00:47:10.518578 kubelet[2686]: E0813 00:47:10.518551 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:10.521928 containerd[1524]: time="2025-08-13T00:47:10.521887317Z" level=info msg="CreateContainer within sandbox \"6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:47:10.537659 containerd[1524]: time="2025-08-13T00:47:10.537544267Z" level=info msg="Container a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:10.547372 containerd[1524]: time="2025-08-13T00:47:10.547318312Z" level=info msg="CreateContainer within sandbox \"6134e72623bf2b89aa12d87ef513e071c6e5659d96fce38d4359e68d9d60d9e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee\"" Aug 13 00:47:10.549081 containerd[1524]: time="2025-08-13T00:47:10.549034529Z" level=info msg="StartContainer for \"a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee\"" Aug 13 00:47:10.552675 containerd[1524]: time="2025-08-13T00:47:10.552593529Z" level=info msg="connecting to shim a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee" address="unix:///run/containerd/s/29fc85be89c75d1ac2d9a74aa416ef55cf9692379a9a3b13f7f94c041e112961" protocol=ttrpc version=3 Aug 13 00:47:10.578769 systemd[1]: Started 
cri-containerd-a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee.scope - libcontainer container a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee. Aug 13 00:47:10.608332 containerd[1524]: time="2025-08-13T00:47:10.608272146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kmshq,Uid:9c34371c-954e-404c-86cf-076cee928dff,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:47:10.658250 containerd[1524]: time="2025-08-13T00:47:10.658105745Z" level=info msg="connecting to shim f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549" address="unix:///run/containerd/s/d0c9a83a40ec831ec3ae5f337392aa574e172be158bcabb01893d8da65fd1e1f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:10.666647 containerd[1524]: time="2025-08-13T00:47:10.666598424Z" level=info msg="StartContainer for \"a8b0d70bb609ac2a5c60da4aa7e9b1b9686a40761d76c5bbef984ee1845120ee\" returns successfully" Aug 13 00:47:10.699661 systemd[1]: Started cri-containerd-f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549.scope - libcontainer container f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549. Aug 13 00:47:10.777494 containerd[1524]: time="2025-08-13T00:47:10.777366443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-kmshq,Uid:9c34371c-954e-404c-86cf-076cee928dff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549\"" Aug 13 00:47:10.781752 containerd[1524]: time="2025-08-13T00:47:10.781696780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:47:10.785334 systemd-resolved[1403]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Aug 13 00:47:11.284352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548168768.mount: Deactivated successfully. 
Aug 13 00:47:11.645549 kubelet[2686]: E0813 00:47:11.645364 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:11.660602 kubelet[2686]: I0813 00:47:11.660534 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-llfdx" podStartSLOduration=1.660511998 podStartE2EDuration="1.660511998s" podCreationTimestamp="2025-08-13 00:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:11.658848347 +0000 UTC m=+7.289066799" watchObservedRunningTime="2025-08-13 00:47:11.660511998 +0000 UTC m=+7.290730450" Aug 13 00:47:11.666656 kubelet[2686]: E0813 00:47:11.666606 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:12.467776 systemd-timesyncd[1421]: Contacted time server 23.150.40.242:123 (2.flatcar.pool.ntp.org). Aug 13 00:47:12.468833 systemd-timesyncd[1421]: Initial clock synchronization to Wed 2025-08-13 00:47:12.362034 UTC. Aug 13 00:47:12.612603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078100054.mount: Deactivated successfully. 
Aug 13 00:47:12.649394 kubelet[2686]: E0813 00:47:12.649207 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:12.649394 kubelet[2686]: E0813 00:47:12.649300 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:13.558864 kubelet[2686]: E0813 00:47:13.558459 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:13.650798 kubelet[2686]: E0813 00:47:13.650707 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:13.651890 kubelet[2686]: E0813 00:47:13.650707 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:14.180634 kubelet[2686]: E0813 00:47:14.180038 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:14.652899 kubelet[2686]: E0813 00:47:14.652402 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:15.877756 containerd[1524]: time="2025-08-13T00:47:15.877480315Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:15.878257 containerd[1524]: 
time="2025-08-13T00:47:15.878211177Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 00:47:15.879799 containerd[1524]: time="2025-08-13T00:47:15.879724360Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:15.882097 containerd[1524]: time="2025-08-13T00:47:15.882036753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:15.883041 containerd[1524]: time="2025-08-13T00:47:15.883000659Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 5.101252009s" Aug 13 00:47:15.883266 containerd[1524]: time="2025-08-13T00:47:15.883245901Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 00:47:15.889866 containerd[1524]: time="2025-08-13T00:47:15.889044663Z" level=info msg="CreateContainer within sandbox \"f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:47:15.903458 containerd[1524]: time="2025-08-13T00:47:15.902341704Z" level=info msg="Container 75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:15.913564 containerd[1524]: time="2025-08-13T00:47:15.913510316Z" level=info msg="CreateContainer within sandbox 
\"f3351cb0bd8e1ea399152c14e8530801c9c620c9b4393ac5455112529ead9549\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7\"" Aug 13 00:47:15.914218 containerd[1524]: time="2025-08-13T00:47:15.914180723Z" level=info msg="StartContainer for \"75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7\"" Aug 13 00:47:15.916910 containerd[1524]: time="2025-08-13T00:47:15.916839396Z" level=info msg="connecting to shim 75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7" address="unix:///run/containerd/s/d0c9a83a40ec831ec3ae5f337392aa574e172be158bcabb01893d8da65fd1e1f" protocol=ttrpc version=3 Aug 13 00:47:15.952849 systemd[1]: Started cri-containerd-75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7.scope - libcontainer container 75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7. Aug 13 00:47:16.000390 containerd[1524]: time="2025-08-13T00:47:16.000342318Z" level=info msg="StartContainer for \"75787befc4455d0c71ab4da32ef7fd6b29f3c462652d40caf3a5c5547cf2b1b7\" returns successfully" Aug 13 00:47:20.142644 update_engine[1500]: I20250813 00:47:20.141466 1500 update_attempter.cc:509] Updating boot flags... Aug 13 00:47:23.288869 sudo[1774]: pam_unix(sudo:session): session closed for user root Aug 13 00:47:23.291953 sshd[1773]: Connection closed by 139.178.68.195 port 59922 Aug 13 00:47:23.292744 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Aug 13 00:47:23.299279 systemd[1]: sshd@6-134.199.224.26:22-139.178.68.195:59922.service: Deactivated successfully. Aug 13 00:47:23.305718 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:47:23.305952 systemd[1]: session-7.scope: Consumed 5.635s CPU time, 157.5M memory peak. Aug 13 00:47:23.309982 systemd-logind[1499]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:47:23.313287 systemd-logind[1499]: Removed session 7. 
Aug 13 00:47:27.370683 kubelet[2686]: I0813 00:47:27.370541 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-kmshq" podStartSLOduration=12.265470434 podStartE2EDuration="17.370493909s" podCreationTimestamp="2025-08-13 00:47:10 +0000 UTC" firstStartedPulling="2025-08-13 00:47:10.779918881 +0000 UTC m=+6.410137325" lastFinishedPulling="2025-08-13 00:47:15.884942356 +0000 UTC m=+11.515160800" observedRunningTime="2025-08-13 00:47:16.674948436 +0000 UTC m=+12.305166886" watchObservedRunningTime="2025-08-13 00:47:27.370493909 +0000 UTC m=+23.000712364" Aug 13 00:47:27.398136 systemd[1]: Created slice kubepods-besteffort-pod671615fc_bcbe_4600_9332_068ab1a3c52b.slice - libcontainer container kubepods-besteffort-pod671615fc_bcbe_4600_9332_068ab1a3c52b.slice. Aug 13 00:47:27.472746 kubelet[2686]: I0813 00:47:27.472680 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kl4\" (UniqueName: \"kubernetes.io/projected/671615fc-bcbe-4600-9332-068ab1a3c52b-kube-api-access-k5kl4\") pod \"calico-typha-6445687d55-phnhc\" (UID: \"671615fc-bcbe-4600-9332-068ab1a3c52b\") " pod="calico-system/calico-typha-6445687d55-phnhc" Aug 13 00:47:27.473345 kubelet[2686]: I0813 00:47:27.473126 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/671615fc-bcbe-4600-9332-068ab1a3c52b-typha-certs\") pod \"calico-typha-6445687d55-phnhc\" (UID: \"671615fc-bcbe-4600-9332-068ab1a3c52b\") " pod="calico-system/calico-typha-6445687d55-phnhc" Aug 13 00:47:27.473345 kubelet[2686]: I0813 00:47:27.473190 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671615fc-bcbe-4600-9332-068ab1a3c52b-tigera-ca-bundle\") pod \"calico-typha-6445687d55-phnhc\" (UID: 
\"671615fc-bcbe-4600-9332-068ab1a3c52b\") " pod="calico-system/calico-typha-6445687d55-phnhc" Aug 13 00:47:27.710723 kubelet[2686]: E0813 00:47:27.710651 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:27.712374 containerd[1524]: time="2025-08-13T00:47:27.711887615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6445687d55-phnhc,Uid:671615fc-bcbe-4600-9332-068ab1a3c52b,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:27.754949 containerd[1524]: time="2025-08-13T00:47:27.754883617Z" level=info msg="connecting to shim 0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c" address="unix:///run/containerd/s/3cf20b3e92f6ac6a32740a3ccca5343611032b153fc2c0feaaed7b86625c3661" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:27.820703 systemd[1]: Started cri-containerd-0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c.scope - libcontainer container 0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c. 
Aug 13 00:47:27.977541 kubelet[2686]: W0813 00:47:27.977301 2686 reflector.go:561] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:27.978521 kubelet[2686]: E0813 00:47:27.978209 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:27.982026 kubelet[2686]: W0813 00:47:27.981731 2686 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:27.982713 kubelet[2686]: E0813 00:47:27.981989 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:27.982241 systemd[1]: Created slice kubepods-besteffort-poddc095366_e92e_4b67_b559_06a12970ac23.slice - libcontainer container kubepods-besteffort-poddc095366_e92e_4b67_b559_06a12970ac23.slice. 
Aug 13 00:47:28.051110 containerd[1524]: time="2025-08-13T00:47:28.050990533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6445687d55-phnhc,Uid:671615fc-bcbe-4600-9332-068ab1a3c52b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c\"" Aug 13 00:47:28.062184 kubelet[2686]: E0813 00:47:28.062102 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:28.072963 containerd[1524]: time="2025-08-13T00:47:28.072850244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:47:28.077693 kubelet[2686]: I0813 00:47:28.077601 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc095366-e92e-4b67-b559-06a12970ac23-tigera-ca-bundle\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078133 kubelet[2686]: I0813 00:47:28.077941 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-var-lib-calico\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078133 kubelet[2686]: I0813 00:47:28.078006 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-cni-net-dir\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078133 kubelet[2686]: I0813 00:47:28.078032 2686 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-cni-bin-dir\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078133 kubelet[2686]: I0813 00:47:28.078092 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-lib-modules\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078681 kubelet[2686]: I0813 00:47:28.078224 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glpls\" (UniqueName: \"kubernetes.io/projected/dc095366-e92e-4b67-b559-06a12970ac23-kube-api-access-glpls\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078681 kubelet[2686]: I0813 00:47:28.078285 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc095366-e92e-4b67-b559-06a12970ac23-node-certs\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078681 kubelet[2686]: I0813 00:47:28.078317 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-var-run-calico\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078681 kubelet[2686]: I0813 00:47:28.078339 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-xtables-lock\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.078681 kubelet[2686]: I0813 00:47:28.078412 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-policysync\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.080475 kubelet[2686]: I0813 00:47:28.078460 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-cni-log-dir\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.080475 kubelet[2686]: I0813 00:47:28.078484 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc095366-e92e-4b67-b559-06a12970ac23-flexvol-driver-host\") pod \"calico-node-wphjn\" (UID: \"dc095366-e92e-4b67-b559-06a12970ac23\") " pod="calico-system/calico-node-wphjn" Aug 13 00:47:28.181092 kubelet[2686]: E0813 00:47:28.180820 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:28.183316 kubelet[2686]: E0813 00:47:28.183281 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.183316 kubelet[2686]: W0813 
00:47:28.183311 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.183603 kubelet[2686]: E0813 00:47:28.183368 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.208306 kubelet[2686]: E0813 00:47:28.207163 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.208306 kubelet[2686]: W0813 00:47:28.207193 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.208306 kubelet[2686]: E0813 00:47:28.207220 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.270815 kubelet[2686]: E0813 00:47:28.270523 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.270815 kubelet[2686]: W0813 00:47:28.270571 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.270815 kubelet[2686]: E0813 00:47:28.270603 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.270893 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.274556 kubelet[2686]: W0813 00:47:28.270908 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.270926 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.271128 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.274556 kubelet[2686]: W0813 00:47:28.271141 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.271157 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.271329 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.274556 kubelet[2686]: W0813 00:47:28.271338 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.271348 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.274556 kubelet[2686]: E0813 00:47:28.271556 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.275521 kubelet[2686]: W0813 00:47:28.271567 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.275521 kubelet[2686]: E0813 00:47:28.271580 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Aug 13 00:47:28.275521 kubelet[2686]: E0813 00:47:28.271742 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Aug 13 00:47:28.275521 kubelet[2686]: W0813 00:47:28.271751 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Aug 13 00:47:28.275521 kubelet[2686]: E0813 00:47:28.271762 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Aug 13 00:47:28.281818 kubelet[2686]: E0813 00:47:28.281619 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Aug 13 00:47:28.281818 kubelet[2686]: I0813 00:47:28.281663 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/303998f8-9f7b-469e-a6ba-d51ab6db7a8a-registration-dir\") pod \"csi-node-driver-tpcdb\" (UID: \"303998f8-9f7b-469e-a6ba-d51ab6db7a8a\") " pod="calico-system/csi-node-driver-tpcdb" 
Aug 13 00:47:28.282476 kubelet[2686]: I0813 00:47:28.282235 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/303998f8-9f7b-469e-a6ba-d51ab6db7a8a-socket-dir\") pod \"csi-node-driver-tpcdb\" (UID: \"303998f8-9f7b-469e-a6ba-d51ab6db7a8a\") " pod="calico-system/csi-node-driver-tpcdb" 
Aug 13 00:47:28.283805 kubelet[2686]: I0813 00:47:28.282521 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/303998f8-9f7b-469e-a6ba-d51ab6db7a8a-varrun\") pod \"csi-node-driver-tpcdb\" (UID: \"303998f8-9f7b-469e-a6ba-d51ab6db7a8a\") " pod="calico-system/csi-node-driver-tpcdb" 
Aug 13 00:47:28.284670 kubelet[2686]: I0813 00:47:28.283088 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtkx\" (UniqueName: \"kubernetes.io/projected/303998f8-9f7b-469e-a6ba-d51ab6db7a8a-kube-api-access-fwtkx\") pod \"csi-node-driver-tpcdb\" (UID: \"303998f8-9f7b-469e-a6ba-d51ab6db7a8a\") " pod="calico-system/csi-node-driver-tpcdb" 
Aug 13 00:47:28.286436 kubelet[2686]: I0813 00:47:28.285385 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/303998f8-9f7b-469e-a6ba-d51ab6db7a8a-kubelet-dir\") pod \"csi-node-driver-tpcdb\" (UID: \"303998f8-9f7b-469e-a6ba-d51ab6db7a8a\") " pod="calico-system/csi-node-driver-tpcdb" 
Aug 13 00:47:28.286915 kubelet[2686]: E0813 00:47:28.285949 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Aug 13 00:47:28.397149 kubelet[2686]: E0813 00:47:28.397124 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Aug 13 00:47:28.397149 kubelet[2686]: W0813 00:47:28.397143 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Aug 13 00:47:28.397311 kubelet[2686]: E0813 00:47:28.397290 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.397662 kubelet[2686]: E0813 00:47:28.397572 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.397662 kubelet[2686]: W0813 00:47:28.397586 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.397748 kubelet[2686]: E0813 00:47:28.397714 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.398096 kubelet[2686]: E0813 00:47:28.398072 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.398096 kubelet[2686]: W0813 00:47:28.398092 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.398304 kubelet[2686]: E0813 00:47:28.398132 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.399522 kubelet[2686]: E0813 00:47:28.399462 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.399522 kubelet[2686]: W0813 00:47:28.399495 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.399522 kubelet[2686]: E0813 00:47:28.399516 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.399919 kubelet[2686]: E0813 00:47:28.399897 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.399919 kubelet[2686]: W0813 00:47:28.399918 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.400028 kubelet[2686]: E0813 00:47:28.399934 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.413048 kubelet[2686]: E0813 00:47:28.413010 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.413285 kubelet[2686]: W0813 00:47:28.413075 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.413285 kubelet[2686]: E0813 00:47:28.413105 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.493814 kubelet[2686]: E0813 00:47:28.493683 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.493814 kubelet[2686]: W0813 00:47:28.493734 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.493814 kubelet[2686]: E0813 00:47:28.493760 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.594961 kubelet[2686]: E0813 00:47:28.594615 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.594961 kubelet[2686]: W0813 00:47:28.594643 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.594961 kubelet[2686]: E0813 00:47:28.594665 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.695981 kubelet[2686]: E0813 00:47:28.695794 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.695981 kubelet[2686]: W0813 00:47:28.695848 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.695981 kubelet[2686]: E0813 00:47:28.695875 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.797137 kubelet[2686]: E0813 00:47:28.797002 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.797137 kubelet[2686]: W0813 00:47:28.797029 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.797137 kubelet[2686]: E0813 00:47:28.797062 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:28.878945 kubelet[2686]: E0813 00:47:28.878737 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:28.878945 kubelet[2686]: W0813 00:47:28.878772 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:28.878945 kubelet[2686]: E0813 00:47:28.878803 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:28.888783 containerd[1524]: time="2025-08-13T00:47:28.888720347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wphjn,Uid:dc095366-e92e-4b67-b559-06a12970ac23,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:28.927602 containerd[1524]: time="2025-08-13T00:47:28.927010790Z" level=info msg="connecting to shim e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd" address="unix:///run/containerd/s/30d086130b558627f815add068b911d5f8845057a66ae521ed01e41eb5a2c1c9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:28.973734 systemd[1]: Started cri-containerd-e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd.scope - libcontainer container e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd. Aug 13 00:47:29.018610 containerd[1524]: time="2025-08-13T00:47:29.018539934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wphjn,Uid:dc095366-e92e-4b67-b559-06a12970ac23,Namespace:calico-system,Attempt:0,} returns sandbox id \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\"" Aug 13 00:47:29.587088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657485428.mount: Deactivated successfully. 
Aug 13 00:47:30.488920 containerd[1524]: time="2025-08-13T00:47:30.488863361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:30.490190 containerd[1524]: time="2025-08-13T00:47:30.490140657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 00:47:30.490812 containerd[1524]: time="2025-08-13T00:47:30.490781299Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:30.493344 containerd[1524]: time="2025-08-13T00:47:30.492709057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:30.494552 containerd[1524]: time="2025-08-13T00:47:30.493235570Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.420337472s" Aug 13 00:47:30.494552 containerd[1524]: time="2025-08-13T00:47:30.493845423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 00:47:30.496687 containerd[1524]: time="2025-08-13T00:47:30.496643845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:47:30.526880 containerd[1524]: time="2025-08-13T00:47:30.526751179Z" level=info msg="CreateContainer within sandbox \"0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:47:30.546146 kubelet[2686]: E0813 00:47:30.544784 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:30.561049 containerd[1524]: time="2025-08-13T00:47:30.558608214Z" level=info msg="Container a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:30.568478 containerd[1524]: time="2025-08-13T00:47:30.568381405Z" level=info msg="CreateContainer within sandbox \"0a5d52dc619d765833c13967348ae292729a0c25115bed4688c2d274f976401c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472\"" Aug 13 00:47:30.575468 containerd[1524]: time="2025-08-13T00:47:30.574847626Z" level=info msg="StartContainer for \"a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472\"" Aug 13 00:47:30.576632 containerd[1524]: time="2025-08-13T00:47:30.576595692Z" level=info msg="connecting to shim a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472" address="unix:///run/containerd/s/3cf20b3e92f6ac6a32740a3ccca5343611032b153fc2c0feaaed7b86625c3661" protocol=ttrpc version=3 Aug 13 00:47:30.608781 systemd[1]: Started cri-containerd-a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472.scope - libcontainer container a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472. 
Aug 13 00:47:30.681168 containerd[1524]: time="2025-08-13T00:47:30.681028847Z" level=info msg="StartContainer for \"a51654192b7ef072ca22f152f34f76cd7ed8ece72483e955306f1643cdfc9472\" returns successfully" Aug 13 00:47:30.732033 kubelet[2686]: E0813 00:47:30.731972 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:30.793252 kubelet[2686]: E0813 00:47:30.792954 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.793252 kubelet[2686]: W0813 00:47:30.792981 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.793252 kubelet[2686]: E0813 00:47:30.793179 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.793977 kubelet[2686]: E0813 00:47:30.793949 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.794272 kubelet[2686]: W0813 00:47:30.794150 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.794272 kubelet[2686]: E0813 00:47:30.794174 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.794778 kubelet[2686]: E0813 00:47:30.794677 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.794960 kubelet[2686]: W0813 00:47:30.794702 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.794960 kubelet[2686]: E0813 00:47:30.794887 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.796169 kubelet[2686]: E0813 00:47:30.796063 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.796169 kubelet[2686]: W0813 00:47:30.796084 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.796169 kubelet[2686]: E0813 00:47:30.796103 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.796875 kubelet[2686]: E0813 00:47:30.796814 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.796875 kubelet[2686]: W0813 00:47:30.796828 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.796875 kubelet[2686]: E0813 00:47:30.796842 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.797465 kubelet[2686]: E0813 00:47:30.797409 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.797644 kubelet[2686]: W0813 00:47:30.797554 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.797644 kubelet[2686]: E0813 00:47:30.797570 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.798028 kubelet[2686]: E0813 00:47:30.797992 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.798028 kubelet[2686]: W0813 00:47:30.798003 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.798256 kubelet[2686]: E0813 00:47:30.798113 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.798582 kubelet[2686]: E0813 00:47:30.798534 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.798808 kubelet[2686]: W0813 00:47:30.798679 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.798808 kubelet[2686]: E0813 00:47:30.798697 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.799443 kubelet[2686]: E0813 00:47:30.799257 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.799443 kubelet[2686]: W0813 00:47:30.799274 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.799443 kubelet[2686]: E0813 00:47:30.799288 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.799988 kubelet[2686]: E0813 00:47:30.799924 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.799988 kubelet[2686]: W0813 00:47:30.799938 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.799988 kubelet[2686]: E0813 00:47:30.799949 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.800767 kubelet[2686]: E0813 00:47:30.800605 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.800767 kubelet[2686]: W0813 00:47:30.800703 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.800767 kubelet[2686]: E0813 00:47:30.800717 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.801403 kubelet[2686]: E0813 00:47:30.801235 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.801403 kubelet[2686]: W0813 00:47:30.801248 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.801403 kubelet[2686]: E0813 00:47:30.801266 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.802005 kubelet[2686]: E0813 00:47:30.801931 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.802005 kubelet[2686]: W0813 00:47:30.801944 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.802005 kubelet[2686]: E0813 00:47:30.801956 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.802513 kubelet[2686]: E0813 00:47:30.802309 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.802513 kubelet[2686]: W0813 00:47:30.802325 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.802513 kubelet[2686]: E0813 00:47:30.802338 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.803894 kubelet[2686]: E0813 00:47:30.803714 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.803894 kubelet[2686]: W0813 00:47:30.803745 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.803894 kubelet[2686]: E0813 00:47:30.803763 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.819323 kubelet[2686]: E0813 00:47:30.819293 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.819674 kubelet[2686]: W0813 00:47:30.819530 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.819674 kubelet[2686]: E0813 00:47:30.819561 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.821493 kubelet[2686]: E0813 00:47:30.821463 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.821766 kubelet[2686]: W0813 00:47:30.821634 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.821766 kubelet[2686]: E0813 00:47:30.821719 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.822213 kubelet[2686]: E0813 00:47:30.822196 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.822430 kubelet[2686]: W0813 00:47:30.822320 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.822430 kubelet[2686]: E0813 00:47:30.822351 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.822714 kubelet[2686]: E0813 00:47:30.822702 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.822776 kubelet[2686]: W0813 00:47:30.822766 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.822836 kubelet[2686]: E0813 00:47:30.822826 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.823518 kubelet[2686]: E0813 00:47:30.823497 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.823518 kubelet[2686]: W0813 00:47:30.823514 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.823629 kubelet[2686]: E0813 00:47:30.823540 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.824669 kubelet[2686]: E0813 00:47:30.824647 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.824669 kubelet[2686]: W0813 00:47:30.824666 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.824816 kubelet[2686]: E0813 00:47:30.824745 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.824989 kubelet[2686]: E0813 00:47:30.824973 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.825036 kubelet[2686]: W0813 00:47:30.824993 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.825172 kubelet[2686]: E0813 00:47:30.825085 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.825221 kubelet[2686]: E0813 00:47:30.825209 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.825307 kubelet[2686]: W0813 00:47:30.825221 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.825378 kubelet[2686]: E0813 00:47:30.825257 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.826260 kubelet[2686]: E0813 00:47:30.826240 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.826260 kubelet[2686]: W0813 00:47:30.826258 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.826405 kubelet[2686]: E0813 00:47:30.826281 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.826512 kubelet[2686]: E0813 00:47:30.826498 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.826512 kubelet[2686]: W0813 00:47:30.826509 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.826628 kubelet[2686]: E0813 00:47:30.826564 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.826671 kubelet[2686]: E0813 00:47:30.826657 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.826671 kubelet[2686]: W0813 00:47:30.826667 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.826766 kubelet[2686]: E0813 00:47:30.826685 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.827462 kubelet[2686]: E0813 00:47:30.827445 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.827462 kubelet[2686]: W0813 00:47:30.827459 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.828035 kubelet[2686]: E0813 00:47:30.827635 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.828301 kubelet[2686]: E0813 00:47:30.828285 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.828301 kubelet[2686]: W0813 00:47:30.828300 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.828381 kubelet[2686]: E0813 00:47:30.828317 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.829591 kubelet[2686]: E0813 00:47:30.829568 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.829591 kubelet[2686]: W0813 00:47:30.829583 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.829814 kubelet[2686]: E0813 00:47:30.829651 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.829814 kubelet[2686]: E0813 00:47:30.829739 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.829814 kubelet[2686]: W0813 00:47:30.829748 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.829929 kubelet[2686]: E0813 00:47:30.829871 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.829929 kubelet[2686]: W0813 00:47:30.829877 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.829929 kubelet[2686]: E0813 00:47:30.829887 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:30.829929 kubelet[2686]: E0813 00:47:30.829913 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.830350 kubelet[2686]: E0813 00:47:30.830073 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.830350 kubelet[2686]: W0813 00:47:30.830079 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.830350 kubelet[2686]: E0813 00:47:30.830087 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:30.831566 kubelet[2686]: E0813 00:47:30.830506 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:30.831566 kubelet[2686]: W0813 00:47:30.830516 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:30.831566 kubelet[2686]: E0813 00:47:30.830528 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.740898 kubelet[2686]: I0813 00:47:31.740817 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:47:31.741329 kubelet[2686]: E0813 00:47:31.741290 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:31.811137 kubelet[2686]: E0813 00:47:31.810906 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.811137 kubelet[2686]: W0813 00:47:31.810965 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.811137 kubelet[2686]: E0813 00:47:31.811005 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.811829 kubelet[2686]: E0813 00:47:31.811528 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.811829 kubelet[2686]: W0813 00:47:31.811550 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.811829 kubelet[2686]: E0813 00:47:31.811570 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.812148 kubelet[2686]: E0813 00:47:31.812117 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.812594 kubelet[2686]: W0813 00:47:31.812391 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.812594 kubelet[2686]: E0813 00:47:31.812447 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.813080 kubelet[2686]: E0813 00:47:31.812848 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.813080 kubelet[2686]: W0813 00:47:31.812882 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.813080 kubelet[2686]: E0813 00:47:31.812900 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.813537 kubelet[2686]: E0813 00:47:31.813497 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.813743 kubelet[2686]: W0813 00:47:31.813620 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.813743 kubelet[2686]: E0813 00:47:31.813641 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.814443 kubelet[2686]: E0813 00:47:31.814263 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.814443 kubelet[2686]: W0813 00:47:31.814279 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.814443 kubelet[2686]: E0813 00:47:31.814303 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.814709 kubelet[2686]: E0813 00:47:31.814677 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.814938 kubelet[2686]: W0813 00:47:31.814802 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.814938 kubelet[2686]: E0813 00:47:31.814825 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.815230 kubelet[2686]: E0813 00:47:31.815180 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.815230 kubelet[2686]: W0813 00:47:31.815195 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.815527 kubelet[2686]: E0813 00:47:31.815211 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.815724 kubelet[2686]: E0813 00:47:31.815642 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.815988 kubelet[2686]: W0813 00:47:31.815762 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.815988 kubelet[2686]: E0813 00:47:31.815782 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.816429 kubelet[2686]: E0813 00:47:31.816289 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.816429 kubelet[2686]: W0813 00:47:31.816305 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.816429 kubelet[2686]: E0813 00:47:31.816321 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.816825 kubelet[2686]: E0813 00:47:31.816678 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.816825 kubelet[2686]: W0813 00:47:31.816695 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.816825 kubelet[2686]: E0813 00:47:31.816710 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.817324 kubelet[2686]: E0813 00:47:31.817140 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.817324 kubelet[2686]: W0813 00:47:31.817157 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.817324 kubelet[2686]: E0813 00:47:31.817172 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.817711 kubelet[2686]: E0813 00:47:31.817657 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.817711 kubelet[2686]: W0813 00:47:31.817673 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.817997 kubelet[2686]: E0813 00:47:31.817689 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.818228 kubelet[2686]: E0813 00:47:31.818158 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.818386 kubelet[2686]: W0813 00:47:31.818173 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.818563 kubelet[2686]: E0813 00:47:31.818489 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.818937 kubelet[2686]: E0813 00:47:31.818922 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.819147 kubelet[2686]: W0813 00:47:31.818991 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.819147 kubelet[2686]: E0813 00:47:31.819011 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.827683 containerd[1524]: time="2025-08-13T00:47:31.827608930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:31.828928 containerd[1524]: time="2025-08-13T00:47:31.828859192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 00:47:31.829553 containerd[1524]: time="2025-08-13T00:47:31.829503523Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:31.830667 kubelet[2686]: E0813 00:47:31.830575 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.830667 kubelet[2686]: W0813 00:47:31.830600 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.830667 kubelet[2686]: E0813 00:47:31.830623 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.831338 kubelet[2686]: E0813 00:47:31.830980 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.831338 kubelet[2686]: W0813 00:47:31.831001 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.831338 kubelet[2686]: E0813 00:47:31.831053 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.831930 kubelet[2686]: E0813 00:47:31.831907 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.831930 kubelet[2686]: W0813 00:47:31.831924 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.832025 kubelet[2686]: E0813 00:47:31.831942 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.832445 containerd[1524]: time="2025-08-13T00:47:31.832180636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:31.832514 kubelet[2686]: E0813 00:47:31.832313 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.832514 kubelet[2686]: W0813 00:47:31.832323 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.832514 kubelet[2686]: E0813 00:47:31.832350 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.832602 kubelet[2686]: E0813 00:47:31.832551 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.832602 kubelet[2686]: W0813 00:47:31.832559 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.832602 kubelet[2686]: E0813 00:47:31.832581 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.833470 containerd[1524]: time="2025-08-13T00:47:31.833289210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.336603564s" Aug 13 00:47:31.833470 containerd[1524]: time="2025-08-13T00:47:31.833334233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 00:47:31.833615 kubelet[2686]: E0813 00:47:31.833508 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.833615 kubelet[2686]: W0813 00:47:31.833521 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.833615 kubelet[2686]: E0813 00:47:31.833549 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.834081 kubelet[2686]: E0813 00:47:31.833799 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.834081 kubelet[2686]: W0813 00:47:31.833807 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.834081 kubelet[2686]: E0813 00:47:31.833818 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.834430 kubelet[2686]: E0813 00:47:31.834140 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.834430 kubelet[2686]: W0813 00:47:31.834149 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.834430 kubelet[2686]: E0813 00:47:31.834167 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.834430 kubelet[2686]: E0813 00:47:31.834395 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.834430 kubelet[2686]: W0813 00:47:31.834407 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.834934 kubelet[2686]: E0813 00:47:31.834503 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.834934 kubelet[2686]: E0813 00:47:31.834783 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.834934 kubelet[2686]: W0813 00:47:31.834792 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.834934 kubelet[2686]: E0813 00:47:31.834804 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.835096 kubelet[2686]: E0813 00:47:31.835012 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.835096 kubelet[2686]: W0813 00:47:31.835022 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.835096 kubelet[2686]: E0813 00:47:31.835036 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.836211 kubelet[2686]: E0813 00:47:31.836117 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.836211 kubelet[2686]: W0813 00:47:31.836135 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.836211 kubelet[2686]: E0813 00:47:31.836155 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.836680 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837699 kubelet[2686]: W0813 00:47:31.836691 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.836703 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.836856 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837699 kubelet[2686]: W0813 00:47:31.836863 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.836871 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.837025 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837699 kubelet[2686]: W0813 00:47:31.837032 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.837040 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.837699 kubelet[2686]: E0813 00:47:31.837193 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837941 kubelet[2686]: W0813 00:47:31.837199 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837941 kubelet[2686]: E0813 00:47:31.837207 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.837941 kubelet[2686]: E0813 00:47:31.837560 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837941 kubelet[2686]: W0813 00:47:31.837569 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837941 kubelet[2686]: E0813 00:47:31.837578 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:47:31.837941 kubelet[2686]: E0813 00:47:31.837728 2686 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:47:31.837941 kubelet[2686]: W0813 00:47:31.837735 2686 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:47:31.837941 kubelet[2686]: E0813 00:47:31.837743 2686 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:47:31.841440 containerd[1524]: time="2025-08-13T00:47:31.841308872Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:47:31.849445 containerd[1524]: time="2025-08-13T00:47:31.849117340Z" level=info msg="Container 1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:31.874603 containerd[1524]: time="2025-08-13T00:47:31.874545892Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\"" Aug 13 00:47:31.876472 containerd[1524]: time="2025-08-13T00:47:31.875469346Z" level=info msg="StartContainer for \"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\"" Aug 13 00:47:31.878728 containerd[1524]: time="2025-08-13T00:47:31.878681569Z" level=info msg="connecting to shim 1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238" address="unix:///run/containerd/s/30d086130b558627f815add068b911d5f8845057a66ae521ed01e41eb5a2c1c9" protocol=ttrpc version=3 Aug 13 00:47:31.922961 systemd[1]: Started cri-containerd-1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238.scope - libcontainer container 1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238. Aug 13 00:47:32.014580 containerd[1524]: time="2025-08-13T00:47:32.012073579Z" level=info msg="StartContainer for \"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\" returns successfully" Aug 13 00:47:32.028459 systemd[1]: cri-containerd-1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238.scope: Deactivated successfully. 
Aug 13 00:47:32.075049 containerd[1524]: time="2025-08-13T00:47:32.073778869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\" id:\"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\" pid:3399 exited_at:{seconds:1755046052 nanos:31352383}" Aug 13 00:47:32.087408 containerd[1524]: time="2025-08-13T00:47:32.087338290Z" level=info msg="received exit event container_id:\"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\" id:\"1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238\" pid:3399 exited_at:{seconds:1755046052 nanos:31352383}" Aug 13 00:47:32.135674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c05756ac68a195465f04df2fd8b00f0567e314d471a2bfd8a06ea7feeac4238-rootfs.mount: Deactivated successfully. Aug 13 00:47:32.540773 kubelet[2686]: E0813 00:47:32.540656 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:32.749449 containerd[1524]: time="2025-08-13T00:47:32.749290181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:47:32.770078 kubelet[2686]: I0813 00:47:32.768927 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6445687d55-phnhc" podStartSLOduration=3.337733765 podStartE2EDuration="5.768904359s" podCreationTimestamp="2025-08-13 00:47:27 +0000 UTC" firstStartedPulling="2025-08-13 00:47:28.064884305 +0000 UTC m=+23.695102760" lastFinishedPulling="2025-08-13 00:47:30.49605491 +0000 UTC m=+26.126273354" observedRunningTime="2025-08-13 00:47:30.755063071 +0000 UTC m=+26.385281526" watchObservedRunningTime="2025-08-13 00:47:32.768904359 +0000 UTC 
m=+28.399122813" Aug 13 00:47:34.541928 kubelet[2686]: E0813 00:47:34.541129 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:36.541127 kubelet[2686]: E0813 00:47:36.539932 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:36.773393 containerd[1524]: time="2025-08-13T00:47:36.773334212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:36.774622 containerd[1524]: time="2025-08-13T00:47:36.774336126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 00:47:36.775990 containerd[1524]: time="2025-08-13T00:47:36.775933018Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:36.790206 containerd[1524]: time="2025-08-13T00:47:36.790131218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:36.790961 containerd[1524]: time="2025-08-13T00:47:36.790649770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.041118659s" Aug 13 00:47:36.790961 containerd[1524]: time="2025-08-13T00:47:36.790693169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 00:47:36.793858 containerd[1524]: time="2025-08-13T00:47:36.793693508Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:47:36.804446 containerd[1524]: time="2025-08-13T00:47:36.803633293Z" level=info msg="Container 49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:36.807455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523125722.mount: Deactivated successfully. 
Aug 13 00:47:36.817212 containerd[1524]: time="2025-08-13T00:47:36.817131048Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\"" Aug 13 00:47:36.818230 containerd[1524]: time="2025-08-13T00:47:36.818166401Z" level=info msg="StartContainer for \"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\"" Aug 13 00:47:36.819766 containerd[1524]: time="2025-08-13T00:47:36.819645724Z" level=info msg="connecting to shim 49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a" address="unix:///run/containerd/s/30d086130b558627f815add068b911d5f8845057a66ae521ed01e41eb5a2c1c9" protocol=ttrpc version=3 Aug 13 00:47:36.852006 systemd[1]: Started cri-containerd-49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a.scope - libcontainer container 49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a. Aug 13 00:47:36.910509 containerd[1524]: time="2025-08-13T00:47:36.910458361Z" level=info msg="StartContainer for \"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\" returns successfully" Aug 13 00:47:37.502278 systemd[1]: cri-containerd-49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a.scope: Deactivated successfully. Aug 13 00:47:37.502710 systemd[1]: cri-containerd-49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a.scope: Consumed 638ms CPU time, 166.4M memory peak, 13.4M read from disk, 171.2M written to disk. 
Aug 13 00:47:37.503970 containerd[1524]: time="2025-08-13T00:47:37.503851646Z" level=info msg="received exit event container_id:\"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\" id:\"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\" pid:3459 exited_at:{seconds:1755046057 nanos:503453537}" Aug 13 00:47:37.517446 containerd[1524]: time="2025-08-13T00:47:37.516908575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\" id:\"49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a\" pid:3459 exited_at:{seconds:1755046057 nanos:503453537}" Aug 13 00:47:37.574941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49ff3b26a1c1ea75c9d92ab4de1143dc2042642d9704508d360deb32f6444c7a-rootfs.mount: Deactivated successfully. Aug 13 00:47:37.581058 kubelet[2686]: I0813 00:47:37.579272 2686 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:47:37.675108 systemd[1]: Created slice kubepods-burstable-pod339e4b53_06d2_453f_8da7_d9d17cfe4e69.slice - libcontainer container kubepods-burstable-pod339e4b53_06d2_453f_8da7_d9d17cfe4e69.slice. 
Aug 13 00:47:37.688145 kubelet[2686]: I0813 00:47:37.687941 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3aee0c3-d22c-4923-83d0-801fd3bfb671-config-volume\") pod \"coredns-7c65d6cfc9-2fk4r\" (UID: \"b3aee0c3-d22c-4923-83d0-801fd3bfb671\") " pod="kube-system/coredns-7c65d6cfc9-2fk4r" Aug 13 00:47:37.689258 kubelet[2686]: I0813 00:47:37.689206 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8n8b\" (UniqueName: \"kubernetes.io/projected/3084cab9-6178-4e56-8e07-63fed0e9eb28-kube-api-access-k8n8b\") pod \"whisker-549b77d788-zx6tb\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " pod="calico-system/whisker-549b77d788-zx6tb" Aug 13 00:47:37.690842 kubelet[2686]: I0813 00:47:37.690809 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bb91506-e966-4bb2-9481-a569c1240861-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-zjj9v\" (UID: \"4bb91506-e966-4bb2-9481-a569c1240861\") " pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:37.691059 kubelet[2686]: I0813 00:47:37.691037 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvnr\" (UniqueName: \"kubernetes.io/projected/52260ea3-c039-482f-aa45-c9fe8097193a-kube-api-access-bpvnr\") pod \"calico-apiserver-9795f7446-skblz\" (UID: \"52260ea3-c039-482f-aa45-c9fe8097193a\") " pod="calico-apiserver/calico-apiserver-9795f7446-skblz" Aug 13 00:47:37.691780 kubelet[2686]: I0813 00:47:37.691568 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bb91506-e966-4bb2-9481-a569c1240861-config\") pod \"goldmane-58fd7646b9-zjj9v\" (UID: \"4bb91506-e966-4bb2-9481-a569c1240861\") " 
pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:37.691780 kubelet[2686]: I0813 00:47:37.691626 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52260ea3-c039-482f-aa45-c9fe8097193a-calico-apiserver-certs\") pod \"calico-apiserver-9795f7446-skblz\" (UID: \"52260ea3-c039-482f-aa45-c9fe8097193a\") " pod="calico-apiserver/calico-apiserver-9795f7446-skblz" Aug 13 00:47:37.691780 kubelet[2686]: I0813 00:47:37.691660 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-ca-bundle\") pod \"whisker-549b77d788-zx6tb\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " pod="calico-system/whisker-549b77d788-zx6tb" Aug 13 00:47:37.691780 kubelet[2686]: I0813 00:47:37.691688 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv65s\" (UniqueName: \"kubernetes.io/projected/f5957a81-5c65-4759-b9bd-1931ea6ed193-kube-api-access-kv65s\") pod \"calico-apiserver-9795f7446-8x9gq\" (UID: \"f5957a81-5c65-4759-b9bd-1931ea6ed193\") " pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" Aug 13 00:47:37.691780 kubelet[2686]: I0813 00:47:37.691713 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxmm\" (UniqueName: \"kubernetes.io/projected/8bce8461-34c3-42fd-82fd-7820d68ff1b5-kube-api-access-zwxmm\") pod \"calico-kube-controllers-59b7c9845d-k54mx\" (UID: \"8bce8461-34c3-42fd-82fd-7820d68ff1b5\") " pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" Aug 13 00:47:37.692028 kubelet[2686]: I0813 00:47:37.691744 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/4bb91506-e966-4bb2-9481-a569c1240861-goldmane-key-pair\") pod \"goldmane-58fd7646b9-zjj9v\" (UID: \"4bb91506-e966-4bb2-9481-a569c1240861\") " pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:37.692078 kubelet[2686]: I0813 00:47:37.692012 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-backend-key-pair\") pod \"whisker-549b77d788-zx6tb\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " pod="calico-system/whisker-549b77d788-zx6tb" Aug 13 00:47:37.693835 kubelet[2686]: I0813 00:47:37.693583 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bce8461-34c3-42fd-82fd-7820d68ff1b5-tigera-ca-bundle\") pod \"calico-kube-controllers-59b7c9845d-k54mx\" (UID: \"8bce8461-34c3-42fd-82fd-7820d68ff1b5\") " pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" Aug 13 00:47:37.693835 kubelet[2686]: I0813 00:47:37.693647 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg84q\" (UniqueName: \"kubernetes.io/projected/4bb91506-e966-4bb2-9481-a569c1240861-kube-api-access-kg84q\") pod \"goldmane-58fd7646b9-zjj9v\" (UID: \"4bb91506-e966-4bb2-9481-a569c1240861\") " pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:37.693835 kubelet[2686]: I0813 00:47:37.693693 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f5957a81-5c65-4759-b9bd-1931ea6ed193-calico-apiserver-certs\") pod \"calico-apiserver-9795f7446-8x9gq\" (UID: \"f5957a81-5c65-4759-b9bd-1931ea6ed193\") " pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" Aug 13 00:47:37.693835 kubelet[2686]: I0813 00:47:37.693735 2686 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8d29\" (UniqueName: \"kubernetes.io/projected/339e4b53-06d2-453f-8da7-d9d17cfe4e69-kube-api-access-r8d29\") pod \"coredns-7c65d6cfc9-vpwj9\" (UID: \"339e4b53-06d2-453f-8da7-d9d17cfe4e69\") " pod="kube-system/coredns-7c65d6cfc9-vpwj9" Aug 13 00:47:37.693835 kubelet[2686]: I0813 00:47:37.693802 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psqm8\" (UniqueName: \"kubernetes.io/projected/b3aee0c3-d22c-4923-83d0-801fd3bfb671-kube-api-access-psqm8\") pod \"coredns-7c65d6cfc9-2fk4r\" (UID: \"b3aee0c3-d22c-4923-83d0-801fd3bfb671\") " pod="kube-system/coredns-7c65d6cfc9-2fk4r" Aug 13 00:47:37.694223 kubelet[2686]: I0813 00:47:37.693849 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/339e4b53-06d2-453f-8da7-d9d17cfe4e69-config-volume\") pod \"coredns-7c65d6cfc9-vpwj9\" (UID: \"339e4b53-06d2-453f-8da7-d9d17cfe4e69\") " pod="kube-system/coredns-7c65d6cfc9-vpwj9" Aug 13 00:47:37.695804 kubelet[2686]: W0813 00:47:37.695621 2686 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.695804 kubelet[2686]: E0813 00:47:37.695684 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found 
between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.695975 kubelet[2686]: W0813 00:47:37.695857 2686 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.695975 kubelet[2686]: E0813 00:47:37.695878 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.695975 kubelet[2686]: W0813 00:47:37.695951 2686 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.695975 kubelet[2686]: E0813 00:47:37.695966 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.698162 systemd[1]: Created slice 
kubepods-burstable-podb3aee0c3_d22c_4923_83d0_801fd3bfb671.slice - libcontainer container kubepods-burstable-podb3aee0c3_d22c_4923_83d0_801fd3bfb671.slice. Aug 13 00:47:37.699753 kubelet[2686]: W0813 00:47:37.696540 2686 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.699753 kubelet[2686]: E0813 00:47:37.698605 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.699753 kubelet[2686]: W0813 00:47:37.696681 2686 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.699753 kubelet[2686]: E0813 00:47:37.698644 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.699753 kubelet[2686]: W0813 00:47:37.696723 
2686 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.700010 kubelet[2686]: E0813 00:47:37.698671 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.700010 kubelet[2686]: W0813 00:47:37.696766 2686 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4372.1.0-a-508df13d84" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object Aug 13 00:47:37.700010 kubelet[2686]: E0813 00:47:37.698696 2686 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4372.1.0-a-508df13d84\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4372.1.0-a-508df13d84' and this object" logger="UnhandledError" Aug 13 00:47:37.714265 systemd[1]: Created slice kubepods-besteffort-pod8bce8461_34c3_42fd_82fd_7820d68ff1b5.slice - libcontainer container kubepods-besteffort-pod8bce8461_34c3_42fd_82fd_7820d68ff1b5.slice. 
Aug 13 00:47:37.739884 systemd[1]: Created slice kubepods-besteffort-pod3084cab9_6178_4e56_8e07_63fed0e9eb28.slice - libcontainer container kubepods-besteffort-pod3084cab9_6178_4e56_8e07_63fed0e9eb28.slice. Aug 13 00:47:37.756138 systemd[1]: Created slice kubepods-besteffort-pod52260ea3_c039_482f_aa45_c9fe8097193a.slice - libcontainer container kubepods-besteffort-pod52260ea3_c039_482f_aa45_c9fe8097193a.slice. Aug 13 00:47:37.776658 systemd[1]: Created slice kubepods-besteffort-pod4bb91506_e966_4bb2_9481_a569c1240861.slice - libcontainer container kubepods-besteffort-pod4bb91506_e966_4bb2_9481_a569c1240861.slice. Aug 13 00:47:37.788839 systemd[1]: Created slice kubepods-besteffort-podf5957a81_5c65_4759_b9bd_1931ea6ed193.slice - libcontainer container kubepods-besteffort-podf5957a81_5c65_4759_b9bd_1931ea6ed193.slice. Aug 13 00:47:37.794302 containerd[1524]: time="2025-08-13T00:47:37.794230979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:47:37.984084 kubelet[2686]: E0813 00:47:37.984042 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:37.985864 containerd[1524]: time="2025-08-13T00:47:37.985813593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vpwj9,Uid:339e4b53-06d2-453f-8da7-d9d17cfe4e69,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:38.014574 kubelet[2686]: E0813 00:47:38.013986 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:38.016951 containerd[1524]: time="2025-08-13T00:47:38.016053140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fk4r,Uid:b3aee0c3-d22c-4923-83d0-801fd3bfb671,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:38.042190 containerd[1524]: 
time="2025-08-13T00:47:38.042101304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7c9845d-k54mx,Uid:8bce8461-34c3-42fd-82fd-7820d68ff1b5,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:38.235379 containerd[1524]: time="2025-08-13T00:47:38.235250208Z" level=error msg="Failed to destroy network for sandbox \"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.238257 containerd[1524]: time="2025-08-13T00:47:38.237146180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fk4r,Uid:b3aee0c3-d22c-4923-83d0-801fd3bfb671,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.241666 kubelet[2686]: E0813 00:47:38.241595 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.242857 kubelet[2686]: E0813 00:47:38.242236 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2fk4r" Aug 13 00:47:38.242857 kubelet[2686]: E0813 00:47:38.242294 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-2fk4r" Aug 13 00:47:38.243086 containerd[1524]: time="2025-08-13T00:47:38.242644630Z" level=error msg="Failed to destroy network for sandbox \"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.243751 kubelet[2686]: E0813 00:47:38.242825 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-2fk4r_kube-system(b3aee0c3-d22c-4923-83d0-801fd3bfb671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-2fk4r_kube-system(b3aee0c3-d22c-4923-83d0-801fd3bfb671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac0bec00c779405de4a9bda95aca266ef7f55a9237c9ce62905d531be246d454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-2fk4r" podUID="b3aee0c3-d22c-4923-83d0-801fd3bfb671" Aug 13 00:47:38.245112 containerd[1524]: time="2025-08-13T00:47:38.244877638Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-59b7c9845d-k54mx,Uid:8bce8461-34c3-42fd-82fd-7820d68ff1b5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.247994 kubelet[2686]: E0813 00:47:38.245292 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.247994 kubelet[2686]: E0813 00:47:38.245356 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" Aug 13 00:47:38.247994 kubelet[2686]: E0813 00:47:38.245375 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" Aug 13 00:47:38.249111 containerd[1524]: 
time="2025-08-13T00:47:38.246373806Z" level=error msg="Failed to destroy network for sandbox \"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.249111 containerd[1524]: time="2025-08-13T00:47:38.247403909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vpwj9,Uid:339e4b53-06d2-453f-8da7-d9d17cfe4e69,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.249230 kubelet[2686]: E0813 00:47:38.245710 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59b7c9845d-k54mx_calico-system(8bce8461-34c3-42fd-82fd-7820d68ff1b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59b7c9845d-k54mx_calico-system(8bce8461-34c3-42fd-82fd-7820d68ff1b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1686e0d9175e36448699941d9eddb740535bd01b64321fdbb12c9d8eda0aa88f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" podUID="8bce8461-34c3-42fd-82fd-7820d68ff1b5" Aug 13 00:47:38.249230 kubelet[2686]: E0813 00:47:38.248592 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.249230 kubelet[2686]: E0813 00:47:38.248657 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vpwj9" Aug 13 00:47:38.249333 kubelet[2686]: E0813 00:47:38.248683 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vpwj9" Aug 13 00:47:38.249333 kubelet[2686]: E0813 00:47:38.248736 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vpwj9_kube-system(339e4b53-06d2-453f-8da7-d9d17cfe4e69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vpwj9_kube-system(339e4b53-06d2-453f-8da7-d9d17cfe4e69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25278d9c3e49cab8777a96c14d9ebe1aa89cbce22631b520aa973f2743c95f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vpwj9" 
podUID="339e4b53-06d2-453f-8da7-d9d17cfe4e69" Aug 13 00:47:38.549019 systemd[1]: Created slice kubepods-besteffort-pod303998f8_9f7b_469e_a6ba_d51ab6db7a8a.slice - libcontainer container kubepods-besteffort-pod303998f8_9f7b_469e_a6ba_d51ab6db7a8a.slice. Aug 13 00:47:38.553267 containerd[1524]: time="2025-08-13T00:47:38.553191565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpcdb,Uid:303998f8-9f7b-469e-a6ba-d51ab6db7a8a,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:38.639159 containerd[1524]: time="2025-08-13T00:47:38.639008125Z" level=error msg="Failed to destroy network for sandbox \"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.640321 containerd[1524]: time="2025-08-13T00:47:38.640267065Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpcdb,Uid:303998f8-9f7b-469e-a6ba-d51ab6db7a8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.640648 kubelet[2686]: E0813 00:47:38.640602 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:38.641006 kubelet[2686]: E0813 00:47:38.640685 2686 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpcdb" Aug 13 00:47:38.641006 kubelet[2686]: E0813 00:47:38.640713 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpcdb" Aug 13 00:47:38.641006 kubelet[2686]: E0813 00:47:38.640804 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tpcdb_calico-system(303998f8-9f7b-469e-a6ba-d51ab6db7a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tpcdb_calico-system(303998f8-9f7b-469e-a6ba-d51ab6db7a8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"991e9f98f6bead173ba106e83da036734a95d86c47dada0c8b9694c15b82799a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpcdb" podUID="303998f8-9f7b-469e-a6ba-d51ab6db7a8a" Aug 13 00:47:38.803311 kubelet[2686]: E0813 00:47:38.802876 2686 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:47:38.803311 kubelet[2686]: E0813 00:47:38.802988 2686 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/4bb91506-e966-4bb2-9481-a569c1240861-config podName:4bb91506-e966-4bb2-9481-a569c1240861 nodeName:}" failed. No retries permitted until 2025-08-13 00:47:39.302962322 +0000 UTC m=+34.933180774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4bb91506-e966-4bb2-9481-a569c1240861-config") pod "goldmane-58fd7646b9-zjj9v" (UID: "4bb91506-e966-4bb2-9481-a569c1240861") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:47:38.844178 systemd[1]: run-netns-cni\x2d8d63791d\x2d9c1e\x2d5dad\x2d3293\x2dc1a70d0435e3.mount: Deactivated successfully. Aug 13 00:47:38.949558 containerd[1524]: time="2025-08-13T00:47:38.949484409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549b77d788-zx6tb,Uid:3084cab9-6178-4e56-8e07-63fed0e9eb28,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:38.969797 containerd[1524]: time="2025-08-13T00:47:38.969606699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-skblz,Uid:52260ea3-c039-482f-aa45-c9fe8097193a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:38.999915 containerd[1524]: time="2025-08-13T00:47:38.999847408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-8x9gq,Uid:f5957a81-5c65-4759-b9bd-1931ea6ed193,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:39.114511 containerd[1524]: time="2025-08-13T00:47:39.114235951Z" level=error msg="Failed to destroy network for sandbox \"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.117614 containerd[1524]: time="2025-08-13T00:47:39.116761935Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-549b77d788-zx6tb,Uid:3084cab9-6178-4e56-8e07-63fed0e9eb28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.117783 kubelet[2686]: E0813 00:47:39.117633 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.117783 kubelet[2686]: E0813 00:47:39.117703 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549b77d788-zx6tb" Aug 13 00:47:39.117783 kubelet[2686]: E0813 00:47:39.117725 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549b77d788-zx6tb" Aug 13 00:47:39.117925 kubelet[2686]: E0813 00:47:39.117775 2686 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-549b77d788-zx6tb_calico-system(3084cab9-6178-4e56-8e07-63fed0e9eb28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-549b77d788-zx6tb_calico-system(3084cab9-6178-4e56-8e07-63fed0e9eb28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95b3285c229e71b715980297ec1e90653fc2583327f324e7f0d998d4b97009a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-549b77d788-zx6tb" podUID="3084cab9-6178-4e56-8e07-63fed0e9eb28" Aug 13 00:47:39.120639 systemd[1]: run-netns-cni\x2db6fd908c\x2d44a6\x2d6e4a\x2d4913\x2d0d35d8f6f867.mount: Deactivated successfully. Aug 13 00:47:39.134009 containerd[1524]: time="2025-08-13T00:47:39.133853741Z" level=error msg="Failed to destroy network for sandbox \"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.135578 containerd[1524]: time="2025-08-13T00:47:39.135511962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-skblz,Uid:52260ea3-c039-482f-aa45-c9fe8097193a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.137548 kubelet[2686]: E0813 00:47:39.135985 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.137548 kubelet[2686]: E0813 00:47:39.136055 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9795f7446-skblz" Aug 13 00:47:39.137548 kubelet[2686]: E0813 00:47:39.136075 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9795f7446-skblz" Aug 13 00:47:39.137715 kubelet[2686]: E0813 00:47:39.136116 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9795f7446-skblz_calico-apiserver(52260ea3-c039-482f-aa45-c9fe8097193a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9795f7446-skblz_calico-apiserver(52260ea3-c039-482f-aa45-c9fe8097193a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce3ae041fb7539cdeb8695c5d7ba85c4ba8cc37d3cb41d7bb3c6070fdd20bf2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-9795f7446-skblz" podUID="52260ea3-c039-482f-aa45-c9fe8097193a" Aug 13 00:47:39.169938 containerd[1524]: time="2025-08-13T00:47:39.169795718Z" level=error msg="Failed to destroy network for sandbox \"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.171034 containerd[1524]: time="2025-08-13T00:47:39.170908290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-8x9gq,Uid:f5957a81-5c65-4759-b9bd-1931ea6ed193,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.171288 kubelet[2686]: E0813 00:47:39.171250 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.171370 kubelet[2686]: E0813 00:47:39.171326 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" Aug 13 00:47:39.171413 kubelet[2686]: E0813 00:47:39.171390 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" Aug 13 00:47:39.171498 kubelet[2686]: E0813 00:47:39.171460 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9795f7446-8x9gq_calico-apiserver(f5957a81-5c65-4759-b9bd-1931ea6ed193)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9795f7446-8x9gq_calico-apiserver(f5957a81-5c65-4759-b9bd-1931ea6ed193)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e905bb123bd4ffb321a3347545c2c2be53788b3cb638436869f2ac4a3c7c7df4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" podUID="f5957a81-5c65-4759-b9bd-1931ea6ed193" Aug 13 00:47:39.615067 containerd[1524]: time="2025-08-13T00:47:39.615003421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zjj9v,Uid:4bb91506-e966-4bb2-9481-a569c1240861,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:39.739139 containerd[1524]: time="2025-08-13T00:47:39.739071509Z" level=error msg="Failed to destroy network for sandbox \"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Aug 13 00:47:39.740583 containerd[1524]: time="2025-08-13T00:47:39.740377301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zjj9v,Uid:4bb91506-e966-4bb2-9481-a569c1240861,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.740812 kubelet[2686]: E0813 00:47:39.740738 2686 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:47:39.740812 kubelet[2686]: E0813 00:47:39.740806 2686 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:39.741393 kubelet[2686]: E0813 00:47:39.740827 2686 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-58fd7646b9-zjj9v" Aug 13 00:47:39.741864 kubelet[2686]: E0813 00:47:39.741600 2686 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-zjj9v_calico-system(4bb91506-e966-4bb2-9481-a569c1240861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-zjj9v_calico-system(4bb91506-e966-4bb2-9481-a569c1240861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ccae849eb0c37c9e0d7ea8f5b24b050062f54d7e9d4098322a2efc98329aa11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-zjj9v" podUID="4bb91506-e966-4bb2-9481-a569c1240861" Aug 13 00:47:39.840927 systemd[1]: run-netns-cni\x2d78deb917\x2d5f2e\x2defcb\x2d0cf7\x2df0f33d3dcef8.mount: Deactivated successfully. Aug 13 00:47:39.841184 systemd[1]: run-netns-cni\x2d30a5f3b7\x2dd460\x2d9ef2\x2d4766\x2d2ee5d04eb037.mount: Deactivated successfully. Aug 13 00:47:44.174266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252225315.mount: Deactivated successfully. 
Aug 13 00:47:44.228448 containerd[1524]: time="2025-08-13T00:47:44.228332904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:44.229982 containerd[1524]: time="2025-08-13T00:47:44.229938331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 00:47:44.234713 containerd[1524]: time="2025-08-13T00:47:44.234643928Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:44.237886 containerd[1524]: time="2025-08-13T00:47:44.237558976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:44.238594 containerd[1524]: time="2025-08-13T00:47:44.238552258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 6.44414443s" Aug 13 00:47:44.239306 containerd[1524]: time="2025-08-13T00:47:44.238743589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 00:47:44.277034 containerd[1524]: time="2025-08-13T00:47:44.276966409Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:47:44.294777 containerd[1524]: time="2025-08-13T00:47:44.294705873Z" level=info msg="Container 
7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:44.370397 containerd[1524]: time="2025-08-13T00:47:44.370165134Z" level=info msg="CreateContainer within sandbox \"e92b80ed2b922a3998559a2750bc759f311d79eccc2f67055bdd29656c1c7edd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\"" Aug 13 00:47:44.371890 containerd[1524]: time="2025-08-13T00:47:44.371377229Z" level=info msg="StartContainer for \"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\"" Aug 13 00:47:44.373447 containerd[1524]: time="2025-08-13T00:47:44.373378535Z" level=info msg="connecting to shim 7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31" address="unix:///run/containerd/s/30d086130b558627f815add068b911d5f8845057a66ae521ed01e41eb5a2c1c9" protocol=ttrpc version=3 Aug 13 00:47:44.597742 systemd[1]: Started cri-containerd-7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31.scope - libcontainer container 7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31. Aug 13 00:47:44.711778 containerd[1524]: time="2025-08-13T00:47:44.711550211Z" level=info msg="StartContainer for \"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" returns successfully" Aug 13 00:47:44.865088 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:47:44.865294 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Aug 13 00:47:45.160221 kubelet[2686]: I0813 00:47:45.158758 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wphjn" podStartSLOduration=2.929781826 podStartE2EDuration="18.158727566s" podCreationTimestamp="2025-08-13 00:47:27 +0000 UTC" firstStartedPulling="2025-08-13 00:47:29.020466423 +0000 UTC m=+24.650684867" lastFinishedPulling="2025-08-13 00:47:44.249412163 +0000 UTC m=+39.879630607" observedRunningTime="2025-08-13 00:47:44.906415624 +0000 UTC m=+40.536634090" watchObservedRunningTime="2025-08-13 00:47:45.158727566 +0000 UTC m=+40.788946020" Aug 13 00:47:45.293121 kubelet[2686]: I0813 00:47:45.292116 2686 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-ca-bundle\") pod \"3084cab9-6178-4e56-8e07-63fed0e9eb28\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " Aug 13 00:47:45.293595 kubelet[2686]: I0813 00:47:45.293042 2686 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3084cab9-6178-4e56-8e07-63fed0e9eb28" (UID: "3084cab9-6178-4e56-8e07-63fed0e9eb28"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:47:45.294932 kubelet[2686]: I0813 00:47:45.294901 2686 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8n8b\" (UniqueName: \"kubernetes.io/projected/3084cab9-6178-4e56-8e07-63fed0e9eb28-kube-api-access-k8n8b\") pod \"3084cab9-6178-4e56-8e07-63fed0e9eb28\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " Aug 13 00:47:45.295111 kubelet[2686]: I0813 00:47:45.295084 2686 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-backend-key-pair\") pod \"3084cab9-6178-4e56-8e07-63fed0e9eb28\" (UID: \"3084cab9-6178-4e56-8e07-63fed0e9eb28\") " Aug 13 00:47:45.295352 kubelet[2686]: I0813 00:47:45.295322 2686 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-ca-bundle\") on node \"ci-4372.1.0-a-508df13d84\" DevicePath \"\"" Aug 13 00:47:45.308581 kubelet[2686]: I0813 00:47:45.308269 2686 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3084cab9-6178-4e56-8e07-63fed0e9eb28-kube-api-access-k8n8b" (OuterVolumeSpecName: "kube-api-access-k8n8b") pod "3084cab9-6178-4e56-8e07-63fed0e9eb28" (UID: "3084cab9-6178-4e56-8e07-63fed0e9eb28"). InnerVolumeSpecName "kube-api-access-k8n8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:47:45.309102 systemd[1]: var-lib-kubelet-pods-3084cab9\x2d6178\x2d4e56\x2d8e07\x2d63fed0e9eb28-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk8n8b.mount: Deactivated successfully. 
Aug 13 00:47:45.312126 kubelet[2686]: I0813 00:47:45.311977 2686 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3084cab9-6178-4e56-8e07-63fed0e9eb28" (UID: "3084cab9-6178-4e56-8e07-63fed0e9eb28"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:47:45.319872 systemd[1]: var-lib-kubelet-pods-3084cab9\x2d6178\x2d4e56\x2d8e07\x2d63fed0e9eb28-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:47:45.393048 containerd[1524]: time="2025-08-13T00:47:45.392989089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"88f587f4f315824ec78a4029a7dc34263437d4488306ae08c2632d1b35f498ee\" pid:3764 exit_status:1 exited_at:{seconds:1755046065 nanos:362793820}" Aug 13 00:47:45.396008 kubelet[2686]: I0813 00:47:45.395914 2686 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3084cab9-6178-4e56-8e07-63fed0e9eb28-whisker-backend-key-pair\") on node \"ci-4372.1.0-a-508df13d84\" DevicePath \"\"" Aug 13 00:47:45.396008 kubelet[2686]: I0813 00:47:45.395969 2686 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8n8b\" (UniqueName: \"kubernetes.io/projected/3084cab9-6178-4e56-8e07-63fed0e9eb28-kube-api-access-k8n8b\") on node \"ci-4372.1.0-a-508df13d84\" DevicePath \"\"" Aug 13 00:47:45.879808 systemd[1]: Removed slice kubepods-besteffort-pod3084cab9_6178_4e56_8e07_63fed0e9eb28.slice - libcontainer container kubepods-besteffort-pod3084cab9_6178_4e56_8e07_63fed0e9eb28.slice. 
Aug 13 00:47:45.961554 systemd[1]: Created slice kubepods-besteffort-pod25565d4a_2570_463a_8533_e37fc253a508.slice - libcontainer container kubepods-besteffort-pod25565d4a_2570_463a_8533_e37fc253a508.slice. Aug 13 00:47:46.002378 kubelet[2686]: I0813 00:47:46.001098 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25565d4a-2570-463a-8533-e37fc253a508-whisker-ca-bundle\") pod \"whisker-55bc7dc5fb-jtr6m\" (UID: \"25565d4a-2570-463a-8533-e37fc253a508\") " pod="calico-system/whisker-55bc7dc5fb-jtr6m" Aug 13 00:47:46.002378 kubelet[2686]: I0813 00:47:46.001170 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fct4d\" (UniqueName: \"kubernetes.io/projected/25565d4a-2570-463a-8533-e37fc253a508-kube-api-access-fct4d\") pod \"whisker-55bc7dc5fb-jtr6m\" (UID: \"25565d4a-2570-463a-8533-e37fc253a508\") " pod="calico-system/whisker-55bc7dc5fb-jtr6m" Aug 13 00:47:46.002378 kubelet[2686]: I0813 00:47:46.001219 2686 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/25565d4a-2570-463a-8533-e37fc253a508-whisker-backend-key-pair\") pod \"whisker-55bc7dc5fb-jtr6m\" (UID: \"25565d4a-2570-463a-8533-e37fc253a508\") " pod="calico-system/whisker-55bc7dc5fb-jtr6m" Aug 13 00:47:46.064066 containerd[1524]: time="2025-08-13T00:47:46.064017319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"29eb3306c08e26c5db08e6ed13970a72734f7ebfe84f59870520cdc21af2d075\" pid:3807 exit_status:1 exited_at:{seconds:1755046066 nanos:62929989}" Aug 13 00:47:46.276059 containerd[1524]: time="2025-08-13T00:47:46.275824617Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-55bc7dc5fb-jtr6m,Uid:25565d4a-2570-463a-8533-e37fc253a508,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:46.544275 kubelet[2686]: I0813 00:47:46.544140 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3084cab9-6178-4e56-8e07-63fed0e9eb28" path="/var/lib/kubelet/pods/3084cab9-6178-4e56-8e07-63fed0e9eb28/volumes" Aug 13 00:47:46.704102 systemd-networkd[1452]: cali2d80e896b74: Link UP Aug 13 00:47:46.706273 systemd-networkd[1452]: cali2d80e896b74: Gained carrier Aug 13 00:47:46.738460 containerd[1524]: 2025-08-13 00:47:46.415 [INFO][3821] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:46.738460 containerd[1524]: 2025-08-13 00:47:46.446 [INFO][3821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0 whisker-55bc7dc5fb- calico-system 25565d4a-2570-463a-8533-e37fc253a508 937 0 2025-08-13 00:47:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55bc7dc5fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 whisker-55bc7dc5fb-jtr6m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2d80e896b74 [] [] }} ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-" Aug 13 00:47:46.738460 containerd[1524]: 2025-08-13 00:47:46.446 [INFO][3821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.738460 containerd[1524]: 
2025-08-13 00:47:46.604 [INFO][3834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" HandleID="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Workload="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.606 [INFO][3834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" HandleID="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Workload="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"whisker-55bc7dc5fb-jtr6m", "timestamp":"2025-08-13 00:47:46.604374622 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.607 [INFO][3834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.609 [INFO][3834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.610 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.627 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.640 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.655 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.657 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739045 containerd[1524]: 2025-08-13 00:47:46.660 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.660 [INFO][3834] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.663 [INFO][3834] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955 Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.669 [INFO][3834] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.676 [INFO][3834] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.126.1/26] block=192.168.126.0/26 handle="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.676 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.1/26] handle="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.676 [INFO][3834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:46.739283 containerd[1524]: 2025-08-13 00:47:46.676 [INFO][3834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.1/26] IPv6=[] ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" HandleID="k8s-pod-network.bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Workload="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.739512 containerd[1524]: 2025-08-13 00:47:46.682 [INFO][3821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0", GenerateName:"whisker-55bc7dc5fb-", Namespace:"calico-system", SelfLink:"", UID:"25565d4a-2570-463a-8533-e37fc253a508", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bc7dc5fb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"whisker-55bc7dc5fb-jtr6m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d80e896b74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:46.739512 containerd[1524]: 2025-08-13 00:47:46.682 [INFO][3821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.1/32] ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.739598 containerd[1524]: 2025-08-13 00:47:46.682 [INFO][3821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d80e896b74 ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.739598 containerd[1524]: 2025-08-13 00:47:46.708 [INFO][3821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.739652 containerd[1524]: 2025-08-13 00:47:46.710 [INFO][3821] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0", GenerateName:"whisker-55bc7dc5fb-", Namespace:"calico-system", SelfLink:"", UID:"25565d4a-2570-463a-8533-e37fc253a508", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55bc7dc5fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955", Pod:"whisker-55bc7dc5fb-jtr6m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2d80e896b74", MAC:"62:b0:25:10:46:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:46.739707 containerd[1524]: 2025-08-13 00:47:46.729 [INFO][3821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" 
Namespace="calico-system" Pod="whisker-55bc7dc5fb-jtr6m" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-whisker--55bc7dc5fb--jtr6m-eth0" Aug 13 00:47:46.874801 containerd[1524]: time="2025-08-13T00:47:46.874614744Z" level=info msg="connecting to shim bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955" address="unix:///run/containerd/s/52469ff02ba630c9333144fd32260db6c1803225a38884b4294d049091d17ddc" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:46.978870 systemd[1]: Started cri-containerd-bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955.scope - libcontainer container bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955. Aug 13 00:47:47.145226 containerd[1524]: time="2025-08-13T00:47:47.145080245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55bc7dc5fb-jtr6m,Uid:25565d4a-2570-463a-8533-e37fc253a508,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955\"" Aug 13 00:47:47.153688 containerd[1524]: time="2025-08-13T00:47:47.153580169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:47:47.421929 containerd[1524]: time="2025-08-13T00:47:47.421777903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"35d9d35c073517352354eebdd7205dc751d4956a32468a4415c9e0695623e658\" pid:3976 exit_status:1 exited_at:{seconds:1755046067 nanos:421094474}" Aug 13 00:47:48.089704 systemd-networkd[1452]: cali2d80e896b74: Gained IPv6LL Aug 13 00:47:48.674475 containerd[1524]: time="2025-08-13T00:47:48.673821339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:48.675497 containerd[1524]: time="2025-08-13T00:47:48.675449822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes 
read=4661207" Aug 13 00:47:48.675918 containerd[1524]: time="2025-08-13T00:47:48.675863986Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:48.691344 containerd[1524]: time="2025-08-13T00:47:48.691286803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:48.692445 containerd[1524]: time="2025-08-13T00:47:48.692196309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.538546863s" Aug 13 00:47:48.692445 containerd[1524]: time="2025-08-13T00:47:48.692247188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 00:47:48.697614 containerd[1524]: time="2025-08-13T00:47:48.697549263Z" level=info msg="CreateContainer within sandbox \"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:47:48.706462 containerd[1524]: time="2025-08-13T00:47:48.704931368Z" level=info msg="Container 3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:48.723251 containerd[1524]: time="2025-08-13T00:47:48.723195839Z" level=info msg="CreateContainer within sandbox \"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca\"" Aug 13 00:47:48.724723 containerd[1524]: time="2025-08-13T00:47:48.724593706Z" level=info msg="StartContainer for \"3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca\"" Aug 13 00:47:48.742904 containerd[1524]: time="2025-08-13T00:47:48.742282111Z" level=info msg="connecting to shim 3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca" address="unix:///run/containerd/s/52469ff02ba630c9333144fd32260db6c1803225a38884b4294d049091d17ddc" protocol=ttrpc version=3 Aug 13 00:47:48.778785 systemd[1]: Started cri-containerd-3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca.scope - libcontainer container 3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca. Aug 13 00:47:48.855568 containerd[1524]: time="2025-08-13T00:47:48.855511769Z" level=info msg="StartContainer for \"3c0f9ed4a7a21d61d15dceffdaabd3fcd04f9b2c1b2e80f349c0ba5f1a2ab6ca\" returns successfully" Aug 13 00:47:48.858239 containerd[1524]: time="2025-08-13T00:47:48.857889796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:47:50.543313 kubelet[2686]: E0813 00:47:50.543274 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:50.556183 containerd[1524]: time="2025-08-13T00:47:50.555821688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vpwj9,Uid:339e4b53-06d2-453f-8da7-d9d17cfe4e69,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:50.991167 systemd-networkd[1452]: califaf8e2feb25: Link UP Aug 13 00:47:50.995211 systemd-networkd[1452]: califaf8e2feb25: Gained carrier Aug 13 00:47:51.047110 containerd[1524]: 2025-08-13 00:47:50.651 [INFO][4092] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:51.047110 containerd[1524]: 2025-08-13 00:47:50.690 
[INFO][4092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0 coredns-7c65d6cfc9- kube-system 339e4b53-06d2-453f-8da7-d9d17cfe4e69 858 0 2025-08-13 00:47:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 coredns-7c65d6cfc9-vpwj9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califaf8e2feb25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-" Aug 13 00:47:51.047110 containerd[1524]: 2025-08-13 00:47:50.692 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.047110 containerd[1524]: 2025-08-13 00:47:50.836 [INFO][4117] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" HandleID="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.838 [INFO][4117] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" HandleID="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003715a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"coredns-7c65d6cfc9-vpwj9", "timestamp":"2025-08-13 00:47:50.836055717 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.838 [INFO][4117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.838 [INFO][4117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.838 [INFO][4117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.864 [INFO][4117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.877 [INFO][4117] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.890 [INFO][4117] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.917 [INFO][4117] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.047981 containerd[1524]: 2025-08-13 00:47:50.928 [INFO][4117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.928 [INFO][4117] ipam/ipam.go 1220: 
Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.931 [INFO][4117] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9 Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.942 [INFO][4117] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.961 [INFO][4117] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.2/26] block=192.168.126.0/26 handle="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.962 [INFO][4117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.2/26] handle="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.962 [INFO][4117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:51.048813 containerd[1524]: 2025-08-13 00:47:50.962 [INFO][4117] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.2/26] IPv6=[] ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" HandleID="k8s-pod-network.7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.049108 containerd[1524]: 2025-08-13 00:47:50.980 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"339e4b53-06d2-453f-8da7-d9d17cfe4e69", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"coredns-7c65d6cfc9-vpwj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"califaf8e2feb25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:51.049108 containerd[1524]: 2025-08-13 00:47:50.980 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.2/32] ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.049108 containerd[1524]: 2025-08-13 00:47:50.980 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaf8e2feb25 ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.049108 containerd[1524]: 2025-08-13 00:47:51.003 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.049108 containerd[1524]: 2025-08-13 00:47:51.010 [INFO][4092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" 
WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"339e4b53-06d2-453f-8da7-d9d17cfe4e69", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9", Pod:"coredns-7c65d6cfc9-vpwj9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califaf8e2feb25", MAC:"f6:3b:8b:f4:89:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:51.049108 containerd[1524]: 
2025-08-13 00:47:51.029 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vpwj9" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--vpwj9-eth0" Aug 13 00:47:51.116147 containerd[1524]: time="2025-08-13T00:47:51.115737468Z" level=info msg="connecting to shim 7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9" address="unix:///run/containerd/s/641103c19b7c2e05dfd2d9d8d6465367f38ad6cae251e2e28f81b15cf4575b6a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:51.192753 systemd[1]: Started cri-containerd-7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9.scope - libcontainer container 7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9. Aug 13 00:47:51.278560 containerd[1524]: time="2025-08-13T00:47:51.278384764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vpwj9,Uid:339e4b53-06d2-453f-8da7-d9d17cfe4e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9\"" Aug 13 00:47:51.280796 kubelet[2686]: E0813 00:47:51.280765 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:51.284747 containerd[1524]: time="2025-08-13T00:47:51.284414029Z" level=info msg="CreateContainer within sandbox \"7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:51.308071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999667591.mount: Deactivated successfully. Aug 13 00:47:51.313091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675608365.mount: Deactivated successfully. 
Aug 13 00:47:51.314263 containerd[1524]: time="2025-08-13T00:47:51.313782173Z" level=info msg="Container f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:51.321627 containerd[1524]: time="2025-08-13T00:47:51.321581339Z" level=info msg="CreateContainer within sandbox \"7daa07bbad51b19e332d88e939bc91a3ceeb6a471f8ce2e97fd29d07debd53e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47\"" Aug 13 00:47:51.324900 containerd[1524]: time="2025-08-13T00:47:51.323081370Z" level=info msg="StartContainer for \"f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47\"" Aug 13 00:47:51.327467 containerd[1524]: time="2025-08-13T00:47:51.327121406Z" level=info msg="connecting to shim f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47" address="unix:///run/containerd/s/641103c19b7c2e05dfd2d9d8d6465367f38ad6cae251e2e28f81b15cf4575b6a" protocol=ttrpc version=3 Aug 13 00:47:51.355687 systemd[1]: Started cri-containerd-f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47.scope - libcontainer container f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47. Aug 13 00:47:51.359356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439012147.mount: Deactivated successfully. 
Aug 13 00:47:51.396325 containerd[1524]: time="2025-08-13T00:47:51.396272324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:51.397938 containerd[1524]: time="2025-08-13T00:47:51.397895324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 00:47:51.399066 containerd[1524]: time="2025-08-13T00:47:51.399033618Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:51.402872 containerd[1524]: time="2025-08-13T00:47:51.402813369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:51.403942 containerd[1524]: time="2025-08-13T00:47:51.403791854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 2.545857542s" Aug 13 00:47:51.404130 containerd[1524]: time="2025-08-13T00:47:51.404105969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 00:47:51.413166 containerd[1524]: time="2025-08-13T00:47:51.413126179Z" level=info msg="CreateContainer within sandbox \"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:47:51.419868 
containerd[1524]: time="2025-08-13T00:47:51.419814832Z" level=info msg="Container b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:51.438343 containerd[1524]: time="2025-08-13T00:47:51.438262121Z" level=info msg="StartContainer for \"f79f19324f09498a01ebb2fcc9c10af2ae25be5dc4d3a0a2b95ecd2ac9246f47\" returns successfully" Aug 13 00:47:51.450743 containerd[1524]: time="2025-08-13T00:47:51.450628167Z" level=info msg="CreateContainer within sandbox \"bdcd72a0d88cd5ba1de14654eadab171879c5788c612cb507127c5c9e1c6a955\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0\"" Aug 13 00:47:51.452541 containerd[1524]: time="2025-08-13T00:47:51.452323748Z" level=info msg="StartContainer for \"b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0\"" Aug 13 00:47:51.455169 containerd[1524]: time="2025-08-13T00:47:51.455128752Z" level=info msg="connecting to shim b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0" address="unix:///run/containerd/s/52469ff02ba630c9333144fd32260db6c1803225a38884b4294d049091d17ddc" protocol=ttrpc version=3 Aug 13 00:47:51.483719 systemd[1]: Started cri-containerd-b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0.scope - libcontainer container b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0. 
Aug 13 00:47:51.543369 containerd[1524]: time="2025-08-13T00:47:51.543129835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpcdb,Uid:303998f8-9f7b-469e-a6ba-d51ab6db7a8a,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:51.586703 containerd[1524]: time="2025-08-13T00:47:51.586397643Z" level=info msg="StartContainer for \"b857c4997faf0873a15c262e95ac3d4c7becb9ec3dfd55df795230a9c19604f0\" returns successfully" Aug 13 00:47:51.804167 systemd-networkd[1452]: calie3e478b2eaa: Link UP Aug 13 00:47:51.807716 systemd-networkd[1452]: calie3e478b2eaa: Gained carrier Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.612 [INFO][4240] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.651 [INFO][4240] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0 csi-node-driver- calico-system 303998f8-9f7b-469e-a6ba-d51ab6db7a8a 757 0 2025-08-13 00:47:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 csi-node-driver-tpcdb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie3e478b2eaa [] [] }} ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.655 [INFO][4240] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" 
Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.728 [INFO][4263] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" HandleID="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Workload="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.729 [INFO][4263] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" HandleID="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Workload="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"csi-node-driver-tpcdb", "timestamp":"2025-08-13 00:47:51.728407267 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.729 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.729 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.729 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.738 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.746 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.755 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.758 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.761 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.762 [INFO][4263] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.764 [INFO][4263] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1 Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.770 [INFO][4263] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.791 [INFO][4263] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.126.3/26] block=192.168.126.0/26 handle="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.791 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.3/26] handle="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.791 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:51.839380 containerd[1524]: 2025-08-13 00:47:51.791 [INFO][4263] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.3/26] IPv6=[] ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" HandleID="k8s-pod-network.1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Workload="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.841822 containerd[1524]: 2025-08-13 00:47:51.797 [INFO][4240] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"303998f8-9f7b-469e-a6ba-d51ab6db7a8a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"csi-node-driver-tpcdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3e478b2eaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:51.841822 containerd[1524]: 2025-08-13 00:47:51.797 [INFO][4240] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.3/32] ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.841822 containerd[1524]: 2025-08-13 00:47:51.797 [INFO][4240] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3e478b2eaa ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.841822 containerd[1524]: 2025-08-13 00:47:51.802 [INFO][4240] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.841822 
containerd[1524]: 2025-08-13 00:47:51.803 [INFO][4240] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"303998f8-9f7b-469e-a6ba-d51ab6db7a8a", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1", Pod:"csi-node-driver-tpcdb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3e478b2eaa", MAC:"42:73:1b:30:fc:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:51.841822 containerd[1524]: 
2025-08-13 00:47:51.833 [INFO][4240] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" Namespace="calico-system" Pod="csi-node-driver-tpcdb" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-csi--node--driver--tpcdb-eth0" Aug 13 00:47:51.879821 containerd[1524]: time="2025-08-13T00:47:51.879649518Z" level=info msg="connecting to shim 1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1" address="unix:///run/containerd/s/4a200bb7587189bdec734a5627eca6605cfc419551a58c37603cc6feff9184df" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:51.911593 kubelet[2686]: E0813 00:47:51.911560 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:51.957699 systemd[1]: Started cri-containerd-1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1.scope - libcontainer container 1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1. 
Aug 13 00:47:51.991908 kubelet[2686]: I0813 00:47:51.991699 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vpwj9" podStartSLOduration=41.99167695 podStartE2EDuration="41.99167695s" podCreationTimestamp="2025-08-13 00:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:51.966188998 +0000 UTC m=+47.596407448" watchObservedRunningTime="2025-08-13 00:47:51.99167695 +0000 UTC m=+47.621895402" Aug 13 00:47:52.035057 kubelet[2686]: I0813 00:47:52.033677 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55bc7dc5fb-jtr6m" podStartSLOduration=2.779136812 podStartE2EDuration="7.033649827s" podCreationTimestamp="2025-08-13 00:47:45 +0000 UTC" firstStartedPulling="2025-08-13 00:47:47.151961505 +0000 UTC m=+42.782179949" lastFinishedPulling="2025-08-13 00:47:51.406474517 +0000 UTC m=+47.036692964" observedRunningTime="2025-08-13 00:47:52.033313217 +0000 UTC m=+47.663531671" watchObservedRunningTime="2025-08-13 00:47:52.033649827 +0000 UTC m=+47.663868286" Aug 13 00:47:52.121383 containerd[1524]: time="2025-08-13T00:47:52.121221978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpcdb,Uid:303998f8-9f7b-469e-a6ba-d51ab6db7a8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1\"" Aug 13 00:47:52.125793 containerd[1524]: time="2025-08-13T00:47:52.125740506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:47:52.542412 kubelet[2686]: E0813 00:47:52.541067 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:52.544069 containerd[1524]: time="2025-08-13T00:47:52.543783017Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fk4r,Uid:b3aee0c3-d22c-4923-83d0-801fd3bfb671,Namespace:kube-system,Attempt:0,}" Aug 13 00:47:52.710108 systemd-networkd[1452]: calidb9d9a63cc4: Link UP Aug 13 00:47:52.711248 systemd-networkd[1452]: calidb9d9a63cc4: Gained carrier Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.598 [INFO][4347] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.615 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0 coredns-7c65d6cfc9- kube-system b3aee0c3-d22c-4923-83d0-801fd3bfb671 868 0 2025-08-13 00:47:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 coredns-7c65d6cfc9-2fk4r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb9d9a63cc4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.616 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.658 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" 
HandleID="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.659 [INFO][4359] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" HandleID="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"coredns-7c65d6cfc9-2fk4r", "timestamp":"2025-08-13 00:47:52.658973003 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.659 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.659 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.659 [INFO][4359] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.668 [INFO][4359] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.674 [INFO][4359] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.680 [INFO][4359] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.683 [INFO][4359] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.686 [INFO][4359] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.686 [INFO][4359] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.688 [INFO][4359] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30 Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.693 [INFO][4359] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.701 [INFO][4359] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.126.4/26] block=192.168.126.0/26 handle="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.701 [INFO][4359] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.4/26] handle="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.701 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:52.734251 containerd[1524]: 2025-08-13 00:47:52.701 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.4/26] IPv6=[] ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" HandleID="k8s-pod-network.949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Workload="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.705 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3aee0c3-d22c-4923-83d0-801fd3bfb671", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"coredns-7c65d6cfc9-2fk4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb9d9a63cc4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.705 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.4/32] ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.705 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb9d9a63cc4 ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.712 [INFO][4347] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.712 [INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b3aee0c3-d22c-4923-83d0-801fd3bfb671", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30", Pod:"coredns-7c65d6cfc9-2fk4r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb9d9a63cc4", MAC:"da:21:25:bd:60:70", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:52.736889 containerd[1524]: 2025-08-13 00:47:52.728 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" Namespace="kube-system" Pod="coredns-7c65d6cfc9-2fk4r" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-coredns--7c65d6cfc9--2fk4r-eth0" Aug 13 00:47:52.774483 containerd[1524]: time="2025-08-13T00:47:52.773929708Z" level=info msg="connecting to shim 949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30" address="unix:///run/containerd/s/636fd3583d662f057f29f708ef78ddf111ed4b6b855ae73c621322fc8e8b9f31" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:52.814728 systemd[1]: Started cri-containerd-949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30.scope - libcontainer container 949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30. 
Aug 13 00:47:52.882978 containerd[1524]: time="2025-08-13T00:47:52.882926608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2fk4r,Uid:b3aee0c3-d22c-4923-83d0-801fd3bfb671,Namespace:kube-system,Attempt:0,} returns sandbox id \"949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30\"" Aug 13 00:47:52.884452 kubelet[2686]: E0813 00:47:52.884388 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:52.889329 containerd[1524]: time="2025-08-13T00:47:52.889276892Z" level=info msg="CreateContainer within sandbox \"949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:47:52.901140 containerd[1524]: time="2025-08-13T00:47:52.900508953Z" level=info msg="Container bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:52.905878 containerd[1524]: time="2025-08-13T00:47:52.905833477Z" level=info msg="CreateContainer within sandbox \"949d82b049704fb91767ae69331d961965a16973555a823e050d083d94a67d30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89\"" Aug 13 00:47:52.907608 containerd[1524]: time="2025-08-13T00:47:52.907570359Z" level=info msg="StartContainer for \"bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89\"" Aug 13 00:47:52.910292 containerd[1524]: time="2025-08-13T00:47:52.910235602Z" level=info msg="connecting to shim bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89" address="unix:///run/containerd/s/636fd3583d662f057f29f708ef78ddf111ed4b6b855ae73c621322fc8e8b9f31" protocol=ttrpc version=3 Aug 13 00:47:52.936862 systemd[1]: Started cri-containerd-bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89.scope - 
libcontainer container bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89. Aug 13 00:47:52.942926 kubelet[2686]: E0813 00:47:52.942870 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:52.993564 containerd[1524]: time="2025-08-13T00:47:52.993495412Z" level=info msg="StartContainer for \"bc4b05ff44aae2eeb431494b7929f1eed885cb327f3cd3bf6ca8b7eb24cc6c89\" returns successfully" Aug 13 00:47:53.017983 systemd-networkd[1452]: calie3e478b2eaa: Gained IPv6LL Aug 13 00:47:53.018339 systemd-networkd[1452]: califaf8e2feb25: Gained IPv6LL Aug 13 00:47:53.447527 containerd[1524]: time="2025-08-13T00:47:53.447467901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:53.450443 containerd[1524]: time="2025-08-13T00:47:53.449764374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 00:47:53.450443 containerd[1524]: time="2025-08-13T00:47:53.450265783Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:53.453149 containerd[1524]: time="2025-08-13T00:47:53.453099191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:53.456089 containerd[1524]: time="2025-08-13T00:47:53.456025788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.330229595s" Aug 13 00:47:53.456089 containerd[1524]: time="2025-08-13T00:47:53.456082256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 00:47:53.459370 containerd[1524]: time="2025-08-13T00:47:53.459327047Z" level=info msg="CreateContainer within sandbox \"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:47:53.482082 containerd[1524]: time="2025-08-13T00:47:53.479279693Z" level=info msg="Container 65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:53.497272 containerd[1524]: time="2025-08-13T00:47:53.497220359Z" level=info msg="CreateContainer within sandbox \"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49\"" Aug 13 00:47:53.498124 containerd[1524]: time="2025-08-13T00:47:53.498084043Z" level=info msg="StartContainer for \"65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49\"" Aug 13 00:47:53.500485 containerd[1524]: time="2025-08-13T00:47:53.500447008Z" level=info msg="connecting to shim 65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49" address="unix:///run/containerd/s/4a200bb7587189bdec734a5627eca6605cfc419551a58c37603cc6feff9184df" protocol=ttrpc version=3 Aug 13 00:47:53.538866 systemd[1]: Started cri-containerd-65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49.scope - libcontainer container 65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49. 
Aug 13 00:47:53.542691 containerd[1524]: time="2025-08-13T00:47:53.542381106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-skblz,Uid:52260ea3-c039-482f-aa45-c9fe8097193a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:53.548444 containerd[1524]: time="2025-08-13T00:47:53.548266282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7c9845d-k54mx,Uid:8bce8461-34c3-42fd-82fd-7820d68ff1b5,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:53.794383 containerd[1524]: time="2025-08-13T00:47:53.794140304Z" level=info msg="StartContainer for \"65b98fa1e8f84ba92b933066de989e039f95e7dabd99953da70cc92bbbd75c49\" returns successfully" Aug 13 00:47:53.797149 containerd[1524]: time="2025-08-13T00:47:53.797036101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:47:53.816347 systemd-networkd[1452]: calia8201a28135: Link UP Aug 13 00:47:53.818345 systemd-networkd[1452]: calia8201a28135: Gained carrier Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.609 [INFO][4489] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.630 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0 calico-kube-controllers-59b7c9845d- calico-system 8bce8461-34c3-42fd-82fd-7820d68ff1b5 869 0 2025-08-13 00:47:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59b7c9845d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 calico-kube-controllers-59b7c9845d-k54mx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
calia8201a28135 [] [] }} ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.631 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.701 [INFO][4518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" HandleID="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.702 [INFO][4518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" HandleID="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003234b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"calico-kube-controllers-59b7c9845d-k54mx", "timestamp":"2025-08-13 00:47:53.701493024 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:53.849143 
containerd[1524]: 2025-08-13 00:47:53.702 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.702 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.702 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.716 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.742 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.757 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.764 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.771 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.772 [INFO][4518] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.776 [INFO][4518] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595 Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.786 [INFO][4518] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.126.0/26 handle="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.802 [INFO][4518] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.5/26] block=192.168.126.0/26 handle="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.802 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.5/26] handle="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.803 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:53.849143 containerd[1524]: 2025-08-13 00:47:53.803 [INFO][4518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.5/26] IPv6=[] ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" HandleID="k8s-pod-network.a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.808 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0", GenerateName:"calico-kube-controllers-59b7c9845d-", Namespace:"calico-system", SelfLink:"", 
UID:"8bce8461-34c3-42fd-82fd-7820d68ff1b5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7c9845d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"calico-kube-controllers-59b7c9845d-k54mx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8201a28135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.808 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.5/32] ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.808 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8201a28135 ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" 
WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.822 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.825 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0", GenerateName:"calico-kube-controllers-59b7c9845d-", Namespace:"calico-system", SelfLink:"", UID:"8bce8461-34c3-42fd-82fd-7820d68ff1b5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7c9845d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", 
ContainerID:"a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595", Pod:"calico-kube-controllers-59b7c9845d-k54mx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8201a28135", MAC:"3e:85:75:38:53:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:53.849901 containerd[1524]: 2025-08-13 00:47:53.841 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" Namespace="calico-system" Pod="calico-kube-controllers-59b7c9845d-k54mx" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--kube--controllers--59b7c9845d--k54mx-eth0" Aug 13 00:47:53.877540 containerd[1524]: time="2025-08-13T00:47:53.877461372Z" level=info msg="connecting to shim a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595" address="unix:///run/containerd/s/ef4d474dfc4db9d17598b493db748bc1e8d73de9f4b0db0e7218f3fa3502b0ad" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:53.896438 systemd-networkd[1452]: calic9a2827f752: Link UP Aug 13 00:47:53.897512 systemd-networkd[1452]: calic9a2827f752: Gained carrier Aug 13 00:47:53.913374 systemd[1]: Started cri-containerd-a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595.scope - libcontainer container a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595. 
Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.671 [INFO][4505] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.712 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0 calico-apiserver-9795f7446- calico-apiserver 52260ea3-c039-482f-aa45-c9fe8097193a 865 0 2025-08-13 00:47:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9795f7446 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 calico-apiserver-9795f7446-skblz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic9a2827f752 [] [] }} ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.712 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.804 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" HandleID="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.930563 
containerd[1524]: 2025-08-13 00:47:53.804 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" HandleID="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-a-508df13d84", "pod":"calico-apiserver-9795f7446-skblz", "timestamp":"2025-08-13 00:47:53.803978757 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.804 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.804 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.804 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.823 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.843 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.854 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.858 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.864 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.864 [INFO][4528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.866 [INFO][4528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.874 [INFO][4528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.886 [INFO][4528] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.126.6/26] block=192.168.126.0/26 handle="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.887 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.6/26] handle="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.887 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:53.930563 containerd[1524]: 2025-08-13 00:47:53.887 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.6/26] IPv6=[] ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" HandleID="k8s-pod-network.ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.891 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0", GenerateName:"calico-apiserver-9795f7446-", Namespace:"calico-apiserver", SelfLink:"", UID:"52260ea3-c039-482f-aa45-c9fe8097193a", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"9795f7446", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"calico-apiserver-9795f7446-skblz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9a2827f752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.891 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.6/32] ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.891 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9a2827f752 ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.898 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" 
WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.900 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0", GenerateName:"calico-apiserver-9795f7446-", Namespace:"calico-apiserver", SelfLink:"", UID:"52260ea3-c039-482f-aa45-c9fe8097193a", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9795f7446", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae", Pod:"calico-apiserver-9795f7446-skblz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic9a2827f752", MAC:"26:d4:de:84:ba:7c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:53.931640 containerd[1524]: 2025-08-13 00:47:53.924 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-skblz" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--skblz-eth0" Aug 13 00:47:53.964173 kubelet[2686]: E0813 00:47:53.964054 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:53.966004 kubelet[2686]: E0813 00:47:53.964568 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:53.966042 containerd[1524]: time="2025-08-13T00:47:53.964820361Z" level=info msg="connecting to shim ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae" address="unix:///run/containerd/s/842119d35753fbc4d599ba29e794b8e907499f338dfeac6b6139406f7cd2da4d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:53.994220 kubelet[2686]: I0813 00:47:53.993925 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2fk4r" podStartSLOduration=43.993901758 podStartE2EDuration="43.993901758s" podCreationTimestamp="2025-08-13 00:47:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:47:53.992271432 +0000 UTC m=+49.622489895" watchObservedRunningTime="2025-08-13 00:47:53.993901758 +0000 UTC m=+49.624120213" Aug 13 00:47:54.023334 systemd[1]: Started 
cri-containerd-ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae.scope - libcontainer container ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae. Aug 13 00:47:54.101414 containerd[1524]: time="2025-08-13T00:47:54.101342232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7c9845d-k54mx,Uid:8bce8461-34c3-42fd-82fd-7820d68ff1b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595\"" Aug 13 00:47:54.138343 containerd[1524]: time="2025-08-13T00:47:54.138276003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-skblz,Uid:52260ea3-c039-482f-aa45-c9fe8097193a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae\"" Aug 13 00:47:54.169618 systemd-networkd[1452]: calidb9d9a63cc4: Gained IPv6LL Aug 13 00:47:54.541521 containerd[1524]: time="2025-08-13T00:47:54.541157566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-8x9gq,Uid:f5957a81-5c65-4759-b9bd-1931ea6ed193,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:47:54.688652 systemd-networkd[1452]: cali5c8c0528de1: Link UP Aug 13 00:47:54.689947 systemd-networkd[1452]: cali5c8c0528de1: Gained carrier Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.579 [INFO][4653] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.592 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0 calico-apiserver-9795f7446- calico-apiserver f5957a81-5c65-4759-b9bd-1931ea6ed193 866 0 2025-08-13 00:47:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9795f7446 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 calico-apiserver-9795f7446-8x9gq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c8c0528de1 [] [] }} ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.593 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.628 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" HandleID="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.628 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" HandleID="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.1.0-a-508df13d84", "pod":"calico-apiserver-9795f7446-8x9gq", "timestamp":"2025-08-13 00:47:54.628121596 +0000 UTC"}, 
Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.628 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.628 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.628 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.637 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.644 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.650 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.652 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.655 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.655 [INFO][4665] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.658 
[INFO][4665] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.665 [INFO][4665] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.678 [INFO][4665] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.126.7/26] block=192.168.126.0/26 handle="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.679 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.7/26] handle="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.679 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:47:54.714522 containerd[1524]: 2025-08-13 00:47:54.679 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.7/26] IPv6=[] ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" HandleID="k8s-pod-network.a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Workload="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.683 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0", GenerateName:"calico-apiserver-9795f7446-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5957a81-5c65-4759-b9bd-1931ea6ed193", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9795f7446", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"calico-apiserver-9795f7446-8x9gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.126.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c8c0528de1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.684 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.7/32] ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.684 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c8c0528de1 ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.690 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.691 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0", GenerateName:"calico-apiserver-9795f7446-", Namespace:"calico-apiserver", SelfLink:"", UID:"f5957a81-5c65-4759-b9bd-1931ea6ed193", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9795f7446", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e", Pod:"calico-apiserver-9795f7446-8x9gq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c8c0528de1", MAC:"8a:ea:97:82:52:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:54.717974 containerd[1524]: 2025-08-13 00:47:54.710 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" Namespace="calico-apiserver" Pod="calico-apiserver-9795f7446-8x9gq" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-calico--apiserver--9795f7446--8x9gq-eth0" Aug 13 00:47:54.781848 containerd[1524]: time="2025-08-13T00:47:54.781796105Z" level=info 
msg="connecting to shim a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e" address="unix:///run/containerd/s/97b3e4baa907a1e0c51cd9082b6a31472c03abe31b360b02be67134b8a88dab1" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:54.839659 systemd[1]: Started cri-containerd-a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e.scope - libcontainer container a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e. Aug 13 00:47:55.002747 kubelet[2686]: E0813 00:47:55.002374 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:55.036986 containerd[1524]: time="2025-08-13T00:47:55.036878725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9795f7446-8x9gq,Uid:f5957a81-5c65-4759-b9bd-1931ea6ed193,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e\"" Aug 13 00:47:55.257647 systemd-networkd[1452]: calic9a2827f752: Gained IPv6LL Aug 13 00:47:55.543179 containerd[1524]: time="2025-08-13T00:47:55.542975591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zjj9v,Uid:4bb91506-e966-4bb2-9481-a569c1240861,Namespace:calico-system,Attempt:0,}" Aug 13 00:47:55.642651 systemd-networkd[1452]: calia8201a28135: Gained IPv6LL Aug 13 00:47:55.712456 containerd[1524]: time="2025-08-13T00:47:55.712057870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:55.713718 containerd[1524]: time="2025-08-13T00:47:55.713621266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 00:47:55.714125 containerd[1524]: time="2025-08-13T00:47:55.714096375Z" level=info msg="ImageCreate event 
name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:55.716656 containerd[1524]: time="2025-08-13T00:47:55.716610651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:55.718107 containerd[1524]: time="2025-08-13T00:47:55.718061278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.920872071s" Aug 13 00:47:55.718188 containerd[1524]: time="2025-08-13T00:47:55.718111959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 00:47:55.720788 containerd[1524]: time="2025-08-13T00:47:55.720744205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:47:55.723155 containerd[1524]: time="2025-08-13T00:47:55.723111042Z" level=info msg="CreateContainer within sandbox \"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:47:55.739446 containerd[1524]: time="2025-08-13T00:47:55.737554179Z" level=info msg="Container 467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:55.746375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397258345.mount: Deactivated successfully. 
Aug 13 00:47:55.771951 containerd[1524]: time="2025-08-13T00:47:55.771862175Z" level=info msg="CreateContainer within sandbox \"1e1c19893444c407153a1d7653db19625ab8b5948bb6c880f94a9591ba5bb9f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a\"" Aug 13 00:47:55.773816 containerd[1524]: time="2025-08-13T00:47:55.773765355Z" level=info msg="StartContainer for \"467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a\"" Aug 13 00:47:55.775906 containerd[1524]: time="2025-08-13T00:47:55.775857869Z" level=info msg="connecting to shim 467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a" address="unix:///run/containerd/s/4a200bb7587189bdec734a5627eca6605cfc419551a58c37603cc6feff9184df" protocol=ttrpc version=3 Aug 13 00:47:55.818982 systemd[1]: Started cri-containerd-467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a.scope - libcontainer container 467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a. 
Aug 13 00:47:55.835857 kubelet[2686]: I0813 00:47:55.835784 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:47:55.837530 kubelet[2686]: E0813 00:47:55.836363 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:55.870136 systemd-networkd[1452]: cali0b5b4670b7b: Link UP Aug 13 00:47:55.871958 systemd-networkd[1452]: cali0b5b4670b7b: Gained carrier Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.672 [INFO][4755] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.695 [INFO][4755] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0 goldmane-58fd7646b9- calico-system 4bb91506-e966-4bb2-9481-a569c1240861 870 0 2025-08-13 00:47:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.1.0-a-508df13d84 goldmane-58fd7646b9-zjj9v eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0b5b4670b7b [] [] }} ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.695 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 
00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.750 [INFO][4767] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" HandleID="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Workload="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.750 [INFO][4767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" HandleID="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Workload="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.1.0-a-508df13d84", "pod":"goldmane-58fd7646b9-zjj9v", "timestamp":"2025-08-13 00:47:55.750558989 +0000 UTC"}, Hostname:"ci-4372.1.0-a-508df13d84", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.751 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.751 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.751 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.1.0-a-508df13d84' Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.762 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.771 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.791 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.797 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.806 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.806 [INFO][4767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.812 [INFO][4767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.842 [INFO][4767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.857 [INFO][4767] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.126.8/26] block=192.168.126.0/26 handle="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.857 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.8/26] handle="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" host="ci-4372.1.0-a-508df13d84" Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.857 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:47:55.918887 containerd[1524]: 2025-08-13 00:47:55.857 [INFO][4767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.8/26] IPv6=[] ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" HandleID="k8s-pod-network.36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Workload="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.864 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4bb91506-e966-4bb2-9481-a569c1240861", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"", Pod:"goldmane-58fd7646b9-zjj9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b5b4670b7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.864 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.8/32] ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.864 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b5b4670b7b ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.874 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.880 [INFO][4755] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4bb91506-e966-4bb2-9481-a569c1240861", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 47, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.1.0-a-508df13d84", ContainerID:"36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff", Pod:"goldmane-58fd7646b9-zjj9v", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b5b4670b7b", MAC:"de:b7:51:26:94:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:47:55.920440 containerd[1524]: 2025-08-13 00:47:55.901 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" Namespace="calico-system" Pod="goldmane-58fd7646b9-zjj9v" WorkloadEndpoint="ci--4372.1.0--a--508df13d84-k8s-goldmane--58fd7646b9--zjj9v-eth0" Aug 13 00:47:55.966983 containerd[1524]: time="2025-08-13T00:47:55.966919019Z" level=info msg="connecting to shim 36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff" address="unix:///run/containerd/s/b83db51a360636eb2044d557bfe44e4cc824f0c0531d7937151a61ef7e42ceb9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 00:47:56.037006 systemd[1]: Started cri-containerd-36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff.scope - libcontainer container 36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff. Aug 13 00:47:56.047803 kubelet[2686]: E0813 00:47:56.047750 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:56.049248 kubelet[2686]: E0813 00:47:56.049057 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:47:56.066081 containerd[1524]: time="2025-08-13T00:47:56.066029714Z" level=info msg="StartContainer for \"467acc84c0ba78cfa6e3df209e08d5bb0805ee23364a2705e17085b622ef885a\" returns successfully" Aug 13 00:47:56.195363 containerd[1524]: time="2025-08-13T00:47:56.195135080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zjj9v,Uid:4bb91506-e966-4bb2-9481-a569c1240861,Namespace:calico-system,Attempt:0,} returns sandbox id \"36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff\"" Aug 13 00:47:56.346632 systemd-networkd[1452]: cali5c8c0528de1: Gained IPv6LL Aug 13 00:47:56.915442 kubelet[2686]: I0813 00:47:56.914517 2686 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a 
new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:47:56.915442 kubelet[2686]: I0813 00:47:56.914575 2686 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:47:57.178350 systemd-networkd[1452]: cali0b5b4670b7b: Gained IPv6LL Aug 13 00:47:57.887790 systemd-networkd[1452]: vxlan.calico: Link UP Aug 13 00:47:57.890029 systemd-networkd[1452]: vxlan.calico: Gained carrier Aug 13 00:47:58.807773 containerd[1524]: time="2025-08-13T00:47:58.807573134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:58.808381 containerd[1524]: time="2025-08-13T00:47:58.808344020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 00:47:58.809449 containerd[1524]: time="2025-08-13T00:47:58.808890198Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:58.812521 containerd[1524]: time="2025-08-13T00:47:58.812460021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:47:58.813800 containerd[1524]: time="2025-08-13T00:47:58.813742376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 
3.092946794s" Aug 13 00:47:58.814373 containerd[1524]: time="2025-08-13T00:47:58.813963084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 00:47:58.817092 containerd[1524]: time="2025-08-13T00:47:58.816601890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:47:58.853235 containerd[1524]: time="2025-08-13T00:47:58.852776071Z" level=info msg="CreateContainer within sandbox \"a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:47:58.888287 containerd[1524]: time="2025-08-13T00:47:58.887713383Z" level=info msg="Container c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:47:58.891870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451590957.mount: Deactivated successfully. 
Aug 13 00:47:58.903041 containerd[1524]: time="2025-08-13T00:47:58.902973822Z" level=info msg="CreateContainer within sandbox \"a4776eac847075f79d66cae852365de84dee349533ddb039b990a9879cd8c595\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\"" Aug 13 00:47:58.904648 containerd[1524]: time="2025-08-13T00:47:58.904521586Z" level=info msg="StartContainer for \"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\"" Aug 13 00:47:58.907544 containerd[1524]: time="2025-08-13T00:47:58.907342144Z" level=info msg="connecting to shim c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b" address="unix:///run/containerd/s/ef4d474dfc4db9d17598b493db748bc1e8d73de9f4b0db0e7218f3fa3502b0ad" protocol=ttrpc version=3 Aug 13 00:47:58.951313 systemd[1]: Started cri-containerd-c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b.scope - libcontainer container c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b. 
Aug 13 00:47:59.031573 containerd[1524]: time="2025-08-13T00:47:59.031533931Z" level=info msg="StartContainer for \"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" returns successfully" Aug 13 00:47:59.130877 kubelet[2686]: I0813 00:47:59.114121 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tpcdb" podStartSLOduration=27.518180849 podStartE2EDuration="31.113402551s" podCreationTimestamp="2025-08-13 00:47:28 +0000 UTC" firstStartedPulling="2025-08-13 00:47:52.124588073 +0000 UTC m=+47.754806524" lastFinishedPulling="2025-08-13 00:47:55.719809794 +0000 UTC m=+51.350028226" observedRunningTime="2025-08-13 00:47:57.107225715 +0000 UTC m=+52.737444168" watchObservedRunningTime="2025-08-13 00:47:59.113402551 +0000 UTC m=+54.743621004" Aug 13 00:47:59.206595 containerd[1524]: time="2025-08-13T00:47:59.206547942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"8d3c5f0a1febaf2b6f3a0f0d56a54466e79752506f7488f28bcdeaef78ce6de1\" pid:5053 exit_status:1 exited_at:{seconds:1755046079 nanos:206082472}" Aug 13 00:47:59.865734 systemd-networkd[1452]: vxlan.calico: Gained IPv6LL Aug 13 00:48:00.159410 containerd[1524]: time="2025-08-13T00:48:00.159124045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"4ef1ca12a42d024c8206c4f8b4651f7a22a5098b7b228f7c72c8bc23b96425ef\" pid:5077 exited_at:{seconds:1755046080 nanos:155416059}" Aug 13 00:48:00.194719 kubelet[2686]: I0813 00:48:00.194633 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59b7c9845d-k54mx" podStartSLOduration=27.485171272 podStartE2EDuration="32.194337421s" podCreationTimestamp="2025-08-13 00:47:28 +0000 UTC" firstStartedPulling="2025-08-13 00:47:54.106794507 +0000 UTC m=+49.737012952" 
lastFinishedPulling="2025-08-13 00:47:58.815960653 +0000 UTC m=+54.446179101" observedRunningTime="2025-08-13 00:47:59.131017549 +0000 UTC m=+54.761235999" watchObservedRunningTime="2025-08-13 00:48:00.194337421 +0000 UTC m=+55.824555901" Aug 13 00:48:02.568814 containerd[1524]: time="2025-08-13T00:48:02.568273682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"fed8367f5542cfea651edd3f7de82744a71af07667b279b9ff2007b45cb23b01\" pid:5104 exited_at:{seconds:1755046082 nanos:567443273}" Aug 13 00:48:03.502474 containerd[1524]: time="2025-08-13T00:48:03.502273288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:03.515261 containerd[1524]: time="2025-08-13T00:48:03.514943230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 00:48:03.517971 containerd[1524]: time="2025-08-13T00:48:03.517912455Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:03.521905 containerd[1524]: time="2025-08-13T00:48:03.521628200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:48:03.525460 containerd[1524]: time="2025-08-13T00:48:03.524108436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.707450202s" 
Aug 13 00:48:03.525973 containerd[1524]: time="2025-08-13T00:48:03.525793548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 00:48:03.527940 containerd[1524]: time="2025-08-13T00:48:03.527737267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:48:03.532937 containerd[1524]: time="2025-08-13T00:48:03.532731264Z" level=info msg="CreateContainer within sandbox \"ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:48:03.555464 containerd[1524]: time="2025-08-13T00:48:03.552838099Z" level=info msg="Container 5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5: CDI devices from CRI Config.CDIDevices: []" Aug 13 00:48:03.603905 containerd[1524]: time="2025-08-13T00:48:03.603159747Z" level=info msg="CreateContainer within sandbox \"ec71951eea23c25099370cacdde24287e3975c802e70fcaccf6d337733e859ae\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5\"" Aug 13 00:48:03.607142 containerd[1524]: time="2025-08-13T00:48:03.607084417Z" level=info msg="StartContainer for \"5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5\"" Aug 13 00:48:03.611868 containerd[1524]: time="2025-08-13T00:48:03.611804255Z" level=info msg="connecting to shim 5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5" address="unix:///run/containerd/s/842119d35753fbc4d599ba29e794b8e907499f338dfeac6b6139406f7cd2da4d" protocol=ttrpc version=3 Aug 13 00:48:03.682797 systemd[1]: Started cri-containerd-5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5.scope - libcontainer container 5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5. 
Aug 13 00:48:03.829130 containerd[1524]: time="2025-08-13T00:48:03.828798829Z" level=info msg="StartContainer for \"5f48135fe7ca7f9ba80c4927ec7ebabc6c84611e1026e517bc60de11dc4be0f5\" returns successfully"
Aug 13 00:48:04.251605 containerd[1524]: time="2025-08-13T00:48:04.249346795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:04.254062 containerd[1524]: time="2025-08-13T00:48:04.254012036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Aug 13 00:48:04.257404 containerd[1524]: time="2025-08-13T00:48:04.257348353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 729.486822ms"
Aug 13 00:48:04.257660 containerd[1524]: time="2025-08-13T00:48:04.257471974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Aug 13 00:48:04.259981 containerd[1524]: time="2025-08-13T00:48:04.259847918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 13 00:48:04.266455 containerd[1524]: time="2025-08-13T00:48:04.265691558Z" level=info msg="CreateContainer within sandbox \"a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 13 00:48:04.277587 containerd[1524]: time="2025-08-13T00:48:04.277539317Z" level=info msg="Container b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:04.409451 containerd[1524]: time="2025-08-13T00:48:04.409217725Z" level=info msg="CreateContainer within sandbox \"a6e82e7bc554b26793c2fd042014db023f0785ceb829ed8b3778176f6599aa3e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7\""
Aug 13 00:48:04.411306 containerd[1524]: time="2025-08-13T00:48:04.411233565Z" level=info msg="StartContainer for \"b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7\""
Aug 13 00:48:04.413756 containerd[1524]: time="2025-08-13T00:48:04.413711591Z" level=info msg="connecting to shim b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7" address="unix:///run/containerd/s/97b3e4baa907a1e0c51cd9082b6a31472c03abe31b360b02be67134b8a88dab1" protocol=ttrpc version=3
Aug 13 00:48:04.453678 systemd[1]: Started cri-containerd-b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7.scope - libcontainer container b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7.
Aug 13 00:48:04.885045 containerd[1524]: time="2025-08-13T00:48:04.884955644Z" level=info msg="StartContainer for \"b2eaefbc00d387c49b77e013cd5ffbea71f3d3ece791cb2585c1134fb3ae2ed7\" returns successfully"
Aug 13 00:48:05.189647 kubelet[2686]: I0813 00:48:05.185217 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9795f7446-8x9gq" podStartSLOduration=31.971356314 podStartE2EDuration="41.185189304s" podCreationTimestamp="2025-08-13 00:47:24 +0000 UTC" firstStartedPulling="2025-08-13 00:47:55.044955042 +0000 UTC m=+50.675173487" lastFinishedPulling="2025-08-13 00:48:04.258788032 +0000 UTC m=+59.889006477" observedRunningTime="2025-08-13 00:48:05.184926195 +0000 UTC m=+60.815144649" watchObservedRunningTime="2025-08-13 00:48:05.185189304 +0000 UTC m=+60.815407756"
Aug 13 00:48:05.189647 kubelet[2686]: I0813 00:48:05.189059 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9795f7446-skblz" podStartSLOduration=31.802302668 podStartE2EDuration="41.189032247s" podCreationTimestamp="2025-08-13 00:47:24 +0000 UTC" firstStartedPulling="2025-08-13 00:47:54.140227788 +0000 UTC m=+49.770446226" lastFinishedPulling="2025-08-13 00:48:03.526957373 +0000 UTC m=+59.157175805" observedRunningTime="2025-08-13 00:48:04.1888389 +0000 UTC m=+59.819057352" watchObservedRunningTime="2025-08-13 00:48:05.189032247 +0000 UTC m=+60.819250698"
Aug 13 00:48:06.182320 kubelet[2686]: I0813 00:48:06.182167 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:48:08.179037 containerd[1524]: time="2025-08-13T00:48:08.178584442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"f6ff160a08a0f7528cc956ed3ea55c7a0ccacf09241a98c64b3136b27bf8e920\" pid:5221 exited_at:{seconds:1755046088 nanos:174243874}"
Aug 13 00:48:08.877639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293345465.mount: Deactivated successfully.
Aug 13 00:48:10.076922 containerd[1524]: time="2025-08-13T00:48:10.073990375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:10.095511 containerd[1524]: time="2025-08-13T00:48:10.095446757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Aug 13 00:48:10.136472 containerd[1524]: time="2025-08-13T00:48:10.135476539Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:10.137565 containerd[1524]: time="2025-08-13T00:48:10.137498928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:48:10.138759 containerd[1524]: time="2025-08-13T00:48:10.138486392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.878337962s"
Aug 13 00:48:10.138759 containerd[1524]: time="2025-08-13T00:48:10.138527088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Aug 13 00:48:10.191000 containerd[1524]: time="2025-08-13T00:48:10.190944055Z" level=info msg="CreateContainer within sandbox \"36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 13 00:48:10.201192 containerd[1524]: time="2025-08-13T00:48:10.201138569Z" level=info msg="Container 2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518: CDI devices from CRI Config.CDIDevices: []"
Aug 13 00:48:10.216385 containerd[1524]: time="2025-08-13T00:48:10.216328523Z" level=info msg="CreateContainer within sandbox \"36c720b04fa790e5665cb4be0a3f55fef94a1414f5751df3614a3540f4abb6ff\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\""
Aug 13 00:48:10.217004 containerd[1524]: time="2025-08-13T00:48:10.216968535Z" level=info msg="StartContainer for \"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\""
Aug 13 00:48:10.220746 containerd[1524]: time="2025-08-13T00:48:10.220695229Z" level=info msg="connecting to shim 2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518" address="unix:///run/containerd/s/b83db51a360636eb2044d557bfe44e4cc824f0c0531d7937151a61ef7e42ceb9" protocol=ttrpc version=3
Aug 13 00:48:10.289410 systemd[1]: Started cri-containerd-2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518.scope - libcontainer container 2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518.
Aug 13 00:48:10.750439 containerd[1524]: time="2025-08-13T00:48:10.750387343Z" level=info msg="StartContainer for \"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" returns successfully"
Aug 13 00:48:11.463621 systemd[1]: Started sshd@7-134.199.224.26:22-139.178.68.195:54860.service - OpenSSH per-connection server daemon (139.178.68.195:54860).
Aug 13 00:48:11.535612 kubelet[2686]: I0813 00:48:11.535522 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-zjj9v" podStartSLOduration=30.592831049 podStartE2EDuration="44.535489471s" podCreationTimestamp="2025-08-13 00:47:27 +0000 UTC" firstStartedPulling="2025-08-13 00:47:56.199720086 +0000 UTC m=+51.829938546" lastFinishedPulling="2025-08-13 00:48:10.142378524 +0000 UTC m=+65.772596968" observedRunningTime="2025-08-13 00:48:11.52699252 +0000 UTC m=+67.157210974" watchObservedRunningTime="2025-08-13 00:48:11.535489471 +0000 UTC m=+67.165707927"
Aug 13 00:48:11.680846 sshd[5310]: Accepted publickey for core from 139.178.68.195 port 54860 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:11.685339 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:11.695518 systemd-logind[1499]: New session 8 of user core.
Aug 13 00:48:11.701701 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:48:11.780503 containerd[1524]: time="2025-08-13T00:48:11.780255277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" id:\"49c3bc23f8b431b143319db421ad8646fe25ddfd5098e40d543abd774edafe23\" pid:5298 exit_status:1 exited_at:{seconds:1755046091 nanos:760028219}"
Aug 13 00:48:12.604552 sshd[5317]: Connection closed by 139.178.68.195 port 54860
Aug 13 00:48:12.604928 sshd-session[5310]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:12.616052 systemd[1]: sshd@7-134.199.224.26:22-139.178.68.195:54860.service: Deactivated successfully.
Aug 13 00:48:12.621989 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:48:12.624554 systemd-logind[1499]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:48:12.628297 systemd-logind[1499]: Removed session 8.
Aug 13 00:48:12.727619 containerd[1524]: time="2025-08-13T00:48:12.726898827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" id:\"69f64e8dfe4c716deede3797be65d2c4031fadd7cb58691696657ffabdf0061d\" pid:5339 exit_status:1 exited_at:{seconds:1755046092 nanos:726260631}"
Aug 13 00:48:17.629207 systemd[1]: Started sshd@8-134.199.224.26:22-139.178.68.195:54866.service - OpenSSH per-connection server daemon (139.178.68.195:54866).
Aug 13 00:48:17.745570 sshd[5354]: Accepted publickey for core from 139.178.68.195 port 54866 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:17.748046 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:17.760904 systemd-logind[1499]: New session 9 of user core.
Aug 13 00:48:17.770206 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:48:18.043625 sshd[5356]: Connection closed by 139.178.68.195 port 54866
Aug 13 00:48:18.044320 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:18.049798 systemd[1]: sshd@8-134.199.224.26:22-139.178.68.195:54866.service: Deactivated successfully.
Aug 13 00:48:18.053330 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 00:48:18.057727 systemd-logind[1499]: Session 9 logged out. Waiting for processes to exit.
Aug 13 00:48:18.059970 systemd-logind[1499]: Removed session 9.
Aug 13 00:48:18.570305 kubelet[2686]: E0813 00:48:18.569907 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:48:23.059346 systemd[1]: Started sshd@9-134.199.224.26:22-139.178.68.195:53976.service - OpenSSH per-connection server daemon (139.178.68.195:53976).
Aug 13 00:48:23.150310 sshd[5378]: Accepted publickey for core from 139.178.68.195 port 53976 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:23.154214 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:23.163493 systemd-logind[1499]: New session 10 of user core.
Aug 13 00:48:23.169888 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 00:48:23.428810 sshd[5380]: Connection closed by 139.178.68.195 port 53976
Aug 13 00:48:23.432296 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:23.448010 systemd[1]: sshd@9-134.199.224.26:22-139.178.68.195:53976.service: Deactivated successfully.
Aug 13 00:48:23.454206 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 00:48:23.457034 systemd-logind[1499]: Session 10 logged out. Waiting for processes to exit.
Aug 13 00:48:23.464920 systemd[1]: Started sshd@10-134.199.224.26:22-139.178.68.195:53978.service - OpenSSH per-connection server daemon (139.178.68.195:53978).
Aug 13 00:48:23.466445 systemd-logind[1499]: Removed session 10.
Aug 13 00:48:23.572080 sshd[5393]: Accepted publickey for core from 139.178.68.195 port 53978 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:23.575551 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:23.583829 systemd-logind[1499]: New session 11 of user core.
Aug 13 00:48:23.590100 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 00:48:23.885818 sshd[5395]: Connection closed by 139.178.68.195 port 53978
Aug 13 00:48:23.889414 sshd-session[5393]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:23.904732 systemd[1]: sshd@10-134.199.224.26:22-139.178.68.195:53978.service: Deactivated successfully.
Aug 13 00:48:23.912746 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 00:48:23.915436 systemd-logind[1499]: Session 11 logged out. Waiting for processes to exit.
Aug 13 00:48:23.923406 systemd[1]: Started sshd@11-134.199.224.26:22-139.178.68.195:53984.service - OpenSSH per-connection server daemon (139.178.68.195:53984).
Aug 13 00:48:23.925909 systemd-logind[1499]: Removed session 11.
Aug 13 00:48:24.038983 sshd[5406]: Accepted publickey for core from 139.178.68.195 port 53984 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:24.042202 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:24.052330 systemd-logind[1499]: New session 12 of user core.
Aug 13 00:48:24.056708 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:48:24.246726 sshd[5409]: Connection closed by 139.178.68.195 port 53984
Aug 13 00:48:24.247654 sshd-session[5406]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:24.254265 systemd[1]: sshd@11-134.199.224.26:22-139.178.68.195:53984.service: Deactivated successfully.
Aug 13 00:48:24.261132 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:48:24.266409 systemd-logind[1499]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:48:24.269203 systemd-logind[1499]: Removed session 12.
Aug 13 00:48:24.579626 kubelet[2686]: E0813 00:48:24.578635 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:48:29.269246 systemd[1]: Started sshd@12-134.199.224.26:22-139.178.68.195:53992.service - OpenSSH per-connection server daemon (139.178.68.195:53992).
Aug 13 00:48:29.405619 sshd[5426]: Accepted publickey for core from 139.178.68.195 port 53992 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:29.409860 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:29.419109 systemd-logind[1499]: New session 13 of user core.
Aug 13 00:48:29.425832 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:48:29.722762 sshd[5428]: Connection closed by 139.178.68.195 port 53992
Aug 13 00:48:29.726676 sshd-session[5426]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:29.738328 systemd-logind[1499]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:48:29.741023 systemd[1]: sshd@12-134.199.224.26:22-139.178.68.195:53992.service: Deactivated successfully.
Aug 13 00:48:29.746680 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:48:29.750376 systemd-logind[1499]: Removed session 13.
Aug 13 00:48:32.672039 containerd[1524]: time="2025-08-13T00:48:32.671874544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"fef2fd8e6eccd23d779d5f6942bcc28de0dac1a39c5dedbc3f1235ae2ada7260\" pid:5452 exit_status:1 exited_at:{seconds:1755046112 nanos:671145989}"
Aug 13 00:48:34.743241 systemd[1]: Started sshd@13-134.199.224.26:22-139.178.68.195:43138.service - OpenSSH per-connection server daemon (139.178.68.195:43138).
Aug 13 00:48:34.933986 sshd[5464]: Accepted publickey for core from 139.178.68.195 port 43138 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:34.938265 sshd-session[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:34.949034 systemd-logind[1499]: New session 14 of user core.
Aug 13 00:48:34.958767 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:48:35.231133 sshd[5466]: Connection closed by 139.178.68.195 port 43138
Aug 13 00:48:35.232275 sshd-session[5464]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:35.239984 systemd-logind[1499]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:48:35.240706 systemd[1]: sshd@13-134.199.224.26:22-139.178.68.195:43138.service: Deactivated successfully.
Aug 13 00:48:35.248577 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:48:35.255178 systemd-logind[1499]: Removed session 14.
Aug 13 00:48:35.544642 kubelet[2686]: E0813 00:48:35.542831 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:48:36.541198 kubelet[2686]: E0813 00:48:36.541140 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:48:37.042495 kubelet[2686]: I0813 00:48:37.041963 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:48:38.130644 containerd[1524]: time="2025-08-13T00:48:38.130590552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"517bba711d688021b4bf77ca512065a254fc44068c9c3be9136e749136d811ec\" pid:5492 exited_at:{seconds:1755046118 nanos:130054423}"
Aug 13 00:48:39.816899 containerd[1524]: time="2025-08-13T00:48:39.816817801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" id:\"5241692312b4af4ddb1cc22b4000e2676bdd55df48c3293ffacca86635d1b6a5\" pid:5523 exited_at:{seconds:1755046119 nanos:816180195}"
Aug 13 00:48:40.248740 systemd[1]: Started sshd@14-134.199.224.26:22-139.178.68.195:55842.service - OpenSSH per-connection server daemon (139.178.68.195:55842).
Aug 13 00:48:40.357446 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 55842 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:40.358637 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:40.367530 systemd-logind[1499]: New session 15 of user core.
Aug 13 00:48:40.374698 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:48:40.657643 sshd[5538]: Connection closed by 139.178.68.195 port 55842
Aug 13 00:48:40.659495 sshd-session[5536]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:40.672524 systemd[1]: sshd@14-134.199.224.26:22-139.178.68.195:55842.service: Deactivated successfully.
Aug 13 00:48:40.676486 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:48:40.679960 systemd-logind[1499]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:48:40.684150 systemd-logind[1499]: Removed session 15.
Aug 13 00:48:45.675538 systemd[1]: Started sshd@15-134.199.224.26:22-139.178.68.195:55852.service - OpenSSH per-connection server daemon (139.178.68.195:55852).
Aug 13 00:48:45.866563 sshd[5553]: Accepted publickey for core from 139.178.68.195 port 55852 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:45.869488 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:45.878206 systemd-logind[1499]: New session 16 of user core.
Aug 13 00:48:45.885674 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:48:46.386481 sshd[5555]: Connection closed by 139.178.68.195 port 55852
Aug 13 00:48:46.390198 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:46.399646 systemd[1]: sshd@15-134.199.224.26:22-139.178.68.195:55852.service: Deactivated successfully.
Aug 13 00:48:46.404612 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:48:46.406359 systemd-logind[1499]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:48:46.413596 systemd[1]: Started sshd@16-134.199.224.26:22-139.178.68.195:55856.service - OpenSSH per-connection server daemon (139.178.68.195:55856).
Aug 13 00:48:46.420291 systemd-logind[1499]: Removed session 16.
Aug 13 00:48:46.503761 sshd[5567]: Accepted publickey for core from 139.178.68.195 port 55856 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:46.506111 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:46.518739 systemd-logind[1499]: New session 17 of user core.
Aug 13 00:48:46.523711 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:48:46.902763 sshd[5569]: Connection closed by 139.178.68.195 port 55856
Aug 13 00:48:46.903604 sshd-session[5567]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:46.917127 systemd[1]: sshd@16-134.199.224.26:22-139.178.68.195:55856.service: Deactivated successfully.
Aug 13 00:48:46.923583 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:48:46.925606 systemd-logind[1499]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:48:46.937254 systemd[1]: Started sshd@17-134.199.224.26:22-139.178.68.195:55864.service - OpenSSH per-connection server daemon (139.178.68.195:55864).
Aug 13 00:48:46.940283 systemd-logind[1499]: Removed session 17.
Aug 13 00:48:47.031643 sshd[5579]: Accepted publickey for core from 139.178.68.195 port 55864 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:47.033982 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:47.045415 systemd-logind[1499]: New session 18 of user core.
Aug 13 00:48:47.049693 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:48:49.983979 sshd[5581]: Connection closed by 139.178.68.195 port 55864
Aug 13 00:48:50.015533 sshd-session[5579]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:50.046487 systemd[1]: sshd@17-134.199.224.26:22-139.178.68.195:55864.service: Deactivated successfully.
Aug 13 00:48:50.049197 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:48:50.049457 systemd[1]: session-18.scope: Consumed 798ms CPU time, 76.5M memory peak.
Aug 13 00:48:50.058745 systemd-logind[1499]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:48:50.063363 systemd[1]: Started sshd@18-134.199.224.26:22-139.178.68.195:32826.service - OpenSSH per-connection server daemon (139.178.68.195:32826).
Aug 13 00:48:50.069579 systemd-logind[1499]: Removed session 18.
Aug 13 00:48:50.224566 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 32826 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:50.227639 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:50.236231 systemd-logind[1499]: New session 19 of user core.
Aug 13 00:48:50.243767 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:48:51.261461 sshd[5604]: Connection closed by 139.178.68.195 port 32826
Aug 13 00:48:51.261908 sshd-session[5599]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:51.283832 systemd[1]: sshd@18-134.199.224.26:22-139.178.68.195:32826.service: Deactivated successfully.
Aug 13 00:48:51.288340 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:48:51.290607 systemd-logind[1499]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:48:51.298927 systemd-logind[1499]: Removed session 19.
Aug 13 00:48:51.301867 systemd[1]: Started sshd@19-134.199.224.26:22-139.178.68.195:32842.service - OpenSSH per-connection server daemon (139.178.68.195:32842).
Aug 13 00:48:51.404338 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 32842 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:51.409891 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:51.427859 systemd-logind[1499]: New session 20 of user core.
Aug 13 00:48:51.433733 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:48:51.753819 sshd[5617]: Connection closed by 139.178.68.195 port 32842
Aug 13 00:48:51.753536 sshd-session[5615]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:51.763361 systemd[1]: sshd@19-134.199.224.26:22-139.178.68.195:32842.service: Deactivated successfully.
Aug 13 00:48:51.772150 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:48:51.776318 systemd-logind[1499]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:48:51.780650 systemd-logind[1499]: Removed session 20.
Aug 13 00:48:52.955730 containerd[1524]: time="2025-08-13T00:48:52.955665790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"bf3e9ec1c555b82dee7e9a057a545453591fb6474f9356fb766fce3cebe250e1\" pid:5640 exited_at:{seconds:1755046132 nanos:949949612}"
Aug 13 00:48:53.613871 containerd[1524]: time="2025-08-13T00:48:53.600724740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" id:\"c34d89675baff1fcfee6bcbaaafa71b2891840c3281997e357f29bacdcce0b64\" pid:5662 exited_at:{seconds:1755046133 nanos:599671864}"
Aug 13 00:48:56.771429 systemd[1]: Started sshd@20-134.199.224.26:22-139.178.68.195:32850.service - OpenSSH per-connection server daemon (139.178.68.195:32850).
Aug 13 00:48:56.915149 sshd[5677]: Accepted publickey for core from 139.178.68.195 port 32850 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:48:56.922343 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:48:56.940160 systemd-logind[1499]: New session 21 of user core.
Aug 13 00:48:56.947986 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:48:57.346923 sshd[5679]: Connection closed by 139.178.68.195 port 32850
Aug 13 00:48:57.347779 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Aug 13 00:48:57.356110 systemd[1]: sshd@20-134.199.224.26:22-139.178.68.195:32850.service: Deactivated successfully.
Aug 13 00:48:57.363635 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:48:57.367250 systemd-logind[1499]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:48:57.370615 systemd-logind[1499]: Removed session 21.
Aug 13 00:49:02.374570 systemd[1]: Started sshd@21-134.199.224.26:22-139.178.68.195:53768.service - OpenSSH per-connection server daemon (139.178.68.195:53768).
Aug 13 00:49:02.481667 sshd[5709]: Accepted publickey for core from 139.178.68.195 port 53768 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:49:02.486646 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:02.497861 systemd-logind[1499]: New session 22 of user core.
Aug 13 00:49:02.505803 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:49:02.821745 sshd[5715]: Connection closed by 139.178.68.195 port 53768
Aug 13 00:49:02.822139 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:02.833598 systemd-logind[1499]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:49:02.834882 systemd[1]: sshd@21-134.199.224.26:22-139.178.68.195:53768.service: Deactivated successfully.
Aug 13 00:49:02.842131 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:49:02.846466 systemd-logind[1499]: Removed session 22.
Aug 13 00:49:03.033284 containerd[1524]: time="2025-08-13T00:49:03.033229038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d0e7b3cd0dc91ce23cd3877b542d11957c82604645f6a42e8c2851282cdef31\" id:\"389032393fe57eb5ba8d49b9b843390de29003623d5d66de80f5edbee1bf3f10\" pid:5702 exited_at:{seconds:1755046143 nanos:30008116}"
Aug 13 00:49:03.551980 kubelet[2686]: E0813 00:49:03.551795 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:49:07.540250 kubelet[2686]: E0813 00:49:07.540070 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:49:07.846404 systemd[1]: Started sshd@22-134.199.224.26:22-139.178.68.195:53776.service - OpenSSH per-connection server daemon (139.178.68.195:53776).
Aug 13 00:49:08.017603 sshd[5730]: Accepted publickey for core from 139.178.68.195 port 53776 ssh2: RSA SHA256:GCnkoWwwWKa9gmWk48+fNrnGi64gEJwijitdL8eboq4
Aug 13 00:49:08.021530 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:49:08.033514 systemd-logind[1499]: New session 23 of user core.
Aug 13 00:49:08.040760 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:49:08.195080 containerd[1524]: time="2025-08-13T00:49:08.195024967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0f760ffe03bca80740ed5d21065e421c947e4ddadc2a92647d62f03730c2f3b\" id:\"2ba22398cf705321ba7f4e810684a29ebb8a4bc7c5e6a489401aceb714b01ae1\" pid:5745 exited_at:{seconds:1755046148 nanos:193311940}"
Aug 13 00:49:08.543547 kubelet[2686]: E0813 00:49:08.541330 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:49:08.605151 sshd[5738]: Connection closed by 139.178.68.195 port 53776
Aug 13 00:49:08.608465 sshd-session[5730]: pam_unix(sshd:session): session closed for user core
Aug 13 00:49:08.618284 systemd[1]: sshd@22-134.199.224.26:22-139.178.68.195:53776.service: Deactivated successfully.
Aug 13 00:49:08.624992 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:49:08.628813 systemd-logind[1499]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:49:08.633002 systemd-logind[1499]: Removed session 23.
Aug 13 00:49:10.010276 containerd[1524]: time="2025-08-13T00:49:10.010080347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b196a7cc0cf845a44f52f3c65654e48e99e8f9b8da5d930e77efe7c03e6c518\" id:\"59e3be7f47a2c96500c32719a6670977496a06c6047d6a822f88ef17c12151e2\" pid:5775 exited_at:{seconds:1755046150 nanos:9614335}"