Nov 5 15:49:32.022467 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025 Nov 5 15:49:32.022506 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:32.022519 kernel: BIOS-provided physical RAM map: Nov 5 15:49:32.022527 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 5 15:49:32.022534 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 5 15:49:32.022541 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 5 15:49:32.022549 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 5 15:49:32.022560 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 5 15:49:32.022578 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 5 15:49:32.022605 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 5 15:49:32.022618 kernel: NX (Execute Disable) protection: active Nov 5 15:49:32.022629 kernel: APIC: Static calls initialized Nov 5 15:49:32.022639 kernel: SMBIOS 2.8 present. Nov 5 15:49:32.022649 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 5 15:49:32.022663 kernel: DMI: Memory slots populated: 1/1 Nov 5 15:49:32.022678 kernel: Hypervisor detected: KVM Nov 5 15:49:32.022695 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 5 15:49:32.022706 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 5 15:49:32.022717 kernel: kvm-clock: using sched offset of 4092561843 cycles Nov 5 15:49:32.022732 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 15:49:32.022744 kernel: tsc: Detected 2494.138 MHz processor Nov 5 15:49:32.022758 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 5 15:49:32.022772 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 5 15:49:32.022787 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 5 15:49:32.022801 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 5 15:49:32.022814 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 5 15:49:32.022825 kernel: ACPI: Early table checksum verification disabled Nov 5 15:49:32.022833 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 5 15:49:32.022842 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022851 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022863 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022871 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 5 15:49:32.022880 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022888 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022897 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022905 kernel: ACPI: WAET 
0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 15:49:32.022914 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Nov 5 15:49:32.022925 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 5 15:49:32.022934 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 5 15:49:32.022943 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 5 15:49:32.022955 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 5 15:49:32.022964 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 5 15:49:32.022976 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 5 15:49:32.022985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 5 15:49:32.022993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 5 15:49:32.023003 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Nov 5 15:49:32.023012 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Nov 5 15:49:32.023020 kernel: Zone ranges: Nov 5 15:49:32.023032 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 5 15:49:32.023041 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 5 15:49:32.023050 kernel: Normal empty Nov 5 15:49:32.023059 kernel: Device empty Nov 5 15:49:32.023068 kernel: Movable zone start for each node Nov 5 15:49:32.023077 kernel: Early memory node ranges Nov 5 15:49:32.023086 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 5 15:49:32.023095 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 5 15:49:32.023107 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 5 15:49:32.023116 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 5 15:49:32.023125 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 5 15:49:32.023134 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 5 15:49:32.023143 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 5 15:49:32.023171 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 5 15:49:32.023185 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 5 15:49:32.023200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 5 15:49:32.023210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 5 15:49:32.023218 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 5 15:49:32.023230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 5 15:49:32.023240 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 5 15:49:32.023249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 5 15:49:32.023258 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 5 15:49:32.023270 kernel: TSC deadline timer available Nov 5 15:49:32.023279 kernel: CPU topo: Max. logical packages: 1 Nov 5 15:49:32.023288 kernel: CPU topo: Max. logical dies: 1 Nov 5 15:49:32.023297 kernel: CPU topo: Max. dies per package: 1 Nov 5 15:49:32.023306 kernel: CPU topo: Max. threads per core: 1 Nov 5 15:49:32.023315 kernel: CPU topo: Num. cores per package: 2 Nov 5 15:49:32.023324 kernel: CPU topo: Num. 
threads per package: 2 Nov 5 15:49:32.023333 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 5 15:49:32.023345 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 5 15:49:32.023354 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 5 15:49:32.023363 kernel: Booting paravirtualized kernel on KVM Nov 5 15:49:32.023372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 5 15:49:32.023382 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 5 15:49:32.023391 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 5 15:49:32.023400 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 5 15:49:32.023411 kernel: pcpu-alloc: [0] 0 1 Nov 5 15:49:32.023420 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 5 15:49:32.023431 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:32.023440 kernel: random: crng init done Nov 5 15:49:32.023449 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 15:49:32.023459 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 5 15:49:32.023470 kernel: Fallback order for Node 0: 0 Nov 5 15:49:32.023479 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Nov 5 15:49:32.023488 kernel: Policy zone: DMA32 Nov 5 15:49:32.023497 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 15:49:32.023506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 15:49:32.023515 kernel: Kernel/User page tables isolation: enabled Nov 5 15:49:32.023525 kernel: ftrace: allocating 40092 entries in 157 pages Nov 5 15:49:32.023534 kernel: ftrace: allocated 157 pages with 5 groups Nov 5 15:49:32.023545 kernel: Dynamic Preempt: voluntary Nov 5 15:49:32.023554 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 15:49:32.023583 kernel: rcu: RCU event tracing is enabled. Nov 5 15:49:32.023593 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 15:49:32.023731 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 15:49:32.023740 kernel: Rude variant of Tasks RCU enabled. Nov 5 15:49:32.023750 kernel: Tracing variant of Tasks RCU enabled. Nov 5 15:49:32.023763 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 15:49:32.023772 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 15:49:32.023782 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:32.023796 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:32.023806 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:49:32.023815 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 5 15:49:32.023825 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 5 15:49:32.023836 kernel: Console: colour VGA+ 80x25 Nov 5 15:49:32.023846 kernel: printk: legacy console [tty0] enabled Nov 5 15:49:32.023855 kernel: printk: legacy console [ttyS0] enabled Nov 5 15:49:32.023864 kernel: ACPI: Core revision 20240827 Nov 5 15:49:32.023874 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 5 15:49:32.023892 kernel: APIC: Switch to symmetric I/O mode setup Nov 5 15:49:32.023904 kernel: x2apic enabled Nov 5 15:49:32.023914 kernel: APIC: Switched APIC routing to: physical x2apic Nov 5 15:49:32.023923 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 5 15:49:32.023935 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 5 15:49:32.023951 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Nov 5 15:49:32.023961 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 5 15:49:32.023971 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 5 15:49:32.023984 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 5 15:49:32.023993 kernel: Spectre V2 : Mitigation: Retpolines Nov 5 15:49:32.024003 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 5 15:49:32.024013 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 5 15:49:32.024022 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 5 15:49:32.024032 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 5 15:49:32.024041 kernel: MDS: Mitigation: Clear CPU buffers Nov 5 15:49:32.024054 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 5 15:49:32.024064 kernel: active return thunk: its_return_thunk Nov 5 15:49:32.024073 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 5 15:49:32.024095 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 5 15:49:32.024105 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 5 15:49:32.024115 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 5 15:49:32.024124 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 5 15:49:32.024137 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 5 15:49:32.024147 kernel: Freeing SMP alternatives memory: 32K Nov 5 15:49:32.024157 kernel: pid_max: default: 32768 minimum: 301 Nov 5 15:49:32.024166 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 15:49:32.024176 kernel: landlock: Up and running. Nov 5 15:49:32.024185 kernel: SELinux: Initializing. Nov 5 15:49:32.024195 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 15:49:32.024205 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 5 15:49:32.024217 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 5 15:49:32.024227 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 5 15:49:32.024237 kernel: signal: max sigframe size: 1776 Nov 5 15:49:32.024246 kernel: rcu: Hierarchical SRCU implementation. Nov 5 15:49:32.024263 kernel: rcu: Max phase no-delay instances is 400. 
Nov 5 15:49:32.024273 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 15:49:32.024283 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 5 15:49:32.024299 kernel: smp: Bringing up secondary CPUs ... Nov 5 15:49:32.024319 kernel: smpboot: x86: Booting SMP configuration: Nov 5 15:49:32.024334 kernel: .... node #0, CPUs: #1 Nov 5 15:49:32.024346 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 15:49:32.024359 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Nov 5 15:49:32.024374 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved) Nov 5 15:49:32.024389 kernel: devtmpfs: initialized Nov 5 15:49:32.024402 kernel: x86/mm: Memory block size: 128MB Nov 5 15:49:32.024412 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 15:49:32.024422 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 15:49:32.024432 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 15:49:32.024441 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 15:49:32.024451 kernel: audit: initializing netlink subsys (disabled) Nov 5 15:49:32.024460 kernel: audit: type=2000 audit(1762357770.239:1): state=initialized audit_enabled=0 res=1 Nov 5 15:49:32.024475 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 15:49:32.024485 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 5 15:49:32.024494 kernel: cpuidle: using governor menu Nov 5 15:49:32.024504 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 15:49:32.024514 kernel: dca service started, version 1.12.1 Nov 5 15:49:32.024523 kernel: PCI: Using configuration type 1 for base access Nov 5 15:49:32.024533 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 5 15:49:32.024545 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:49:32.024555 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:49:32.024564 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:49:32.024593 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:49:32.024603 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:49:32.024613 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:49:32.024623 kernel: ACPI: Interpreter enabled Nov 5 15:49:32.024635 kernel: ACPI: PM: (supports S0 S5) Nov 5 15:49:32.024645 kernel: ACPI: Using IOAPIC for interrupt routing Nov 5 15:49:32.024654 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 5 15:49:32.024664 kernel: PCI: Using E820 reservations for host bridge windows Nov 5 15:49:32.024674 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 5 15:49:32.024684 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 15:49:32.024988 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 5 15:49:32.026207 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 5 15:49:32.026433 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 5 15:49:32.026455 kernel: acpiphp: Slot [3] registered Nov 5 15:49:32.026474 kernel: acpiphp: Slot [4] registered Nov 5 15:49:32.026491 kernel: acpiphp: Slot [5] registered Nov 5 15:49:32.026508 kernel: acpiphp: Slot [6] registered Nov 5 15:49:32.026534 kernel: acpiphp: Slot [7] registered Nov 5 15:49:32.026551 kernel: acpiphp: Slot [8] registered Nov 5 15:49:32.026578 kernel: acpiphp: Slot [9] registered Nov 5 15:49:32.026639 kernel: acpiphp: Slot [10] registered Nov 5 15:49:32.026656 kernel: acpiphp: Slot [11] registered Nov 5 15:49:32.026672 kernel: acpiphp: Slot [12] registered Nov 5 15:49:32.026689 kernel: acpiphp: Slot [13] registered Nov 5 15:49:32.026716 kernel: acpiphp: Slot [14] registered Nov 5 15:49:32.026733 kernel: acpiphp: Slot [15] registered Nov 5 15:49:32.026748 kernel: acpiphp: Slot [16] registered Nov 5 15:49:32.026763 kernel: acpiphp: Slot [17] registered Nov 5 15:49:32.026779 kernel: acpiphp: Slot [18] registered Nov 5 15:49:32.026796 kernel: acpiphp: Slot [19] registered Nov 5 15:49:32.026813 kernel: acpiphp: Slot [20] registered Nov 5 15:49:32.026829 kernel: acpiphp: Slot [21] registered Nov 5 15:49:32.026849 kernel: acpiphp: Slot [22] registered Nov 5 15:49:32.026865 kernel: acpiphp: Slot [23] registered Nov 5 15:49:32.026881 kernel: acpiphp: Slot [24] registered Nov 5 15:49:32.026898 kernel: acpiphp: Slot [25] registered Nov 5 15:49:32.026915 kernel: acpiphp: Slot [26] registered Nov 5 15:49:32.026935 kernel: acpiphp: Slot [27] registered Nov 5 15:49:32.026953 kernel: acpiphp: Slot [28] registered Nov 5 15:49:32.026973 kernel: acpiphp: Slot [29] registered Nov 5 15:49:32.026991 kernel: acpiphp: Slot [30] registered Nov 5 15:49:32.027007 kernel: acpiphp: Slot [31] registered Nov 5 15:49:32.027024 kernel: PCI host bridge to bus 0000:00 Nov 5 15:49:32.027271 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 5 15:49:32.027456 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 5 15:49:32.028971 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 5 15:49:32.029164 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] 
Nov 5 15:49:32.029300 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 5 15:49:32.029422 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 15:49:32.029619 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Nov 5 15:49:32.029775 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Nov 5 15:49:32.029932 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Nov 5 15:49:32.030115 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Nov 5 15:49:32.030316 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Nov 5 15:49:32.030453 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Nov 5 15:49:32.033647 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Nov 5 15:49:32.036318 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Nov 5 15:49:32.036580 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Nov 5 15:49:32.036721 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Nov 5 15:49:32.036861 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Nov 5 15:49:32.036993 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 5 15:49:32.037122 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 5 15:49:32.037267 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Nov 5 15:49:32.037437 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Nov 5 15:49:32.037708 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Nov 5 15:49:32.037850 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Nov 5 15:49:32.037982 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Nov 5 15:49:32.038120 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 5 15:49:32.038266 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 15:49:32.038403 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Nov 5 15:49:32.038538 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Nov 5 15:49:32.038692 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Nov 5 15:49:32.038837 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 5 15:49:32.039003 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Nov 5 15:49:32.039147 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Nov 5 15:49:32.039313 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 5 15:49:32.039463 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:32.040263 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Nov 5 15:49:32.041899 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Nov 5 15:49:32.042414 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 5 15:49:32.042732 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:32.042882 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Nov 5 15:49:32.043035 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Nov 5 15:49:32.043182 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Nov 5 15:49:32.043330 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 5 15:49:32.043473 kernel: pci 
0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Nov 5 15:49:32.044673 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Nov 5 15:49:32.044837 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Nov 5 15:49:32.044992 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 15:49:32.045130 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Nov 5 15:49:32.045278 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 5 15:49:32.045292 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 5 15:49:32.045303 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 5 15:49:32.045312 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 5 15:49:32.045327 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 5 15:49:32.045337 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 5 15:49:32.045372 kernel: iommu: Default domain type: Translated Nov 5 15:49:32.045383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 5 15:49:32.045393 kernel: PCI: Using ACPI for IRQ routing Nov 5 15:49:32.045417 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 5 15:49:32.045430 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 5 15:49:32.045440 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 5 15:49:32.045612 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 5 15:49:32.045765 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 5 15:49:32.045912 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 5 15:49:32.045925 kernel: vgaarb: loaded Nov 5 15:49:32.045935 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 5 15:49:32.045945 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 5 15:49:32.045955 kernel: clocksource: Switched to clocksource kvm-clock Nov 5 15:49:32.045965 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:49:32.045979 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:49:32.045989 kernel: pnp: PnP ACPI init Nov 5 15:49:32.045999 kernel: pnp: PnP ACPI: found 4 devices Nov 5 15:49:32.046009 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 5 15:49:32.046019 kernel: NET: Registered PF_INET protocol family Nov 5 15:49:32.046029 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:49:32.046039 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 5 15:49:32.046052 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:49:32.046062 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 5 15:49:32.046071 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 5 15:49:32.046084 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 5 15:49:32.046094 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:49:32.046108 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 5 15:49:32.046118 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:49:32.046131 kernel: NET: Registered PF_XDP protocol family Nov 5 15:49:32.046273 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 5 15:49:32.046399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 5 15:49:32.046528 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] 
Nov 5 15:49:32.048227 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 5 15:49:32.048379 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 5 15:49:32.048533 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 5 15:49:32.048710 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 5 15:49:32.048725 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 5 15:49:32.048882 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29727 usecs Nov 5 15:49:32.048897 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:49:32.048908 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 5 15:49:32.048918 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Nov 5 15:49:32.048935 kernel: Initialise system trusted keyrings Nov 5 15:49:32.048945 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 5 15:49:32.048955 kernel: Key type asymmetric registered Nov 5 15:49:32.048965 kernel: Asymmetric key parser 'x509' registered Nov 5 15:49:32.048974 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:49:32.048984 kernel: io scheduler mq-deadline registered Nov 5 15:49:32.048994 kernel: io scheduler kyber registered Nov 5 15:49:32.049007 kernel: io scheduler bfq registered Nov 5 15:49:32.049017 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:49:32.049027 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 5 15:49:32.049037 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 5 15:49:32.049046 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 5 15:49:32.049056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:49:32.049066 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:49:32.049075 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 15:49:32.049088 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 15:49:32.049097 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 15:49:32.049250 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 5 15:49:32.049270 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 15:49:32.049419 kernel: rtc_cmos 00:03: registered as rtc0 Nov 5 15:49:32.049562 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:49:30 UTC (1762357770) Nov 5 15:49:32.049746 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 5 15:49:32.049758 kernel: intel_pstate: CPU model not supported Nov 5 15:49:32.049768 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:49:32.049778 kernel: Segment Routing with IPv6 Nov 5 15:49:32.049788 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:49:32.049798 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:49:32.049808 kernel: Key type dns_resolver registered Nov 5 15:49:32.049821 kernel: IPI shorthand broadcast: enabled Nov 5 15:49:32.049831 kernel: sched_clock: Marking stable (1284003228, 162320618)->(1472772929, -26449083) Nov 5 15:49:32.049846 kernel: registered taskstats version 1 Nov 5 15:49:32.049859 kernel: Loading compiled-in X.509 certificates Nov 5 15:49:32.049870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:49:32.049879 kernel: Demotion targets for Node 0: null Nov 5 15:49:32.049889 kernel: Key type .fscrypt registered Nov 5 15:49:32.049901 kernel: Key type fscrypt-provisioning 
registered Nov 5 15:49:32.049929 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 15:49:32.049942 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:49:32.049952 kernel: ima: No architecture policies found Nov 5 15:49:32.049981 kernel: clk: Disabling unused clocks Nov 5 15:49:32.049994 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 15:49:32.050005 kernel: Write protecting the kernel read-only data: 40960k Nov 5 15:49:32.050019 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 15:49:32.050029 kernel: Run /init as init process Nov 5 15:49:32.050040 kernel: with arguments: Nov 5 15:49:32.050050 kernel: /init Nov 5 15:49:32.050062 kernel: with environment: Nov 5 15:49:32.050072 kernel: HOME=/ Nov 5 15:49:32.050082 kernel: TERM=linux Nov 5 15:49:32.050093 kernel: SCSI subsystem initialized Nov 5 15:49:32.050106 kernel: libata version 3.00 loaded. Nov 5 15:49:32.050274 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 5 15:49:32.050439 kernel: scsi host0: ata_piix Nov 5 15:49:32.050597 kernel: scsi host1: ata_piix Nov 5 15:49:32.050612 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Nov 5 15:49:32.050626 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Nov 5 15:49:32.050636 kernel: ACPI: bus type USB registered Nov 5 15:49:32.050649 kernel: usbcore: registered new interface driver usbfs Nov 5 15:49:32.050659 kernel: usbcore: registered new interface driver hub Nov 5 15:49:32.050670 kernel: usbcore: registered new device driver usb Nov 5 15:49:32.050828 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 5 15:49:32.050973 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 5 15:49:32.051109 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 5 15:49:32.051264 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 5 15:49:32.051430 kernel: hub 1-0:1.0: USB hub found Nov 5 15:49:32.053187 kernel: hub 1-0:1.0: 2 ports detected Nov 5 15:49:32.053467 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 5 15:49:32.053625 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 5 15:49:32.053640 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:49:32.053652 kernel: GPT:16515071 != 125829119 Nov 5 15:49:32.053667 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:49:32.053682 kernel: GPT:16515071 != 125829119 Nov 5 15:49:32.053692 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 15:49:32.053703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 15:49:32.053845 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 5 15:49:32.053973 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 5 15:49:32.054118 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 5 15:49:32.054270 kernel: scsi host2: Virtio SCSI HBA Nov 5 15:49:32.054284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 5 15:49:32.054295 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:49:32.054306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:49:32.054316 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:49:32.054326 kernel: raid6: avx2x4 gen() 21638 MB/s Nov 5 15:49:32.054336 kernel: raid6: avx2x2 gen() 23740 MB/s Nov 5 15:49:32.054352 kernel: raid6: avx2x1 gen() 19735 MB/s Nov 5 15:49:32.054362 kernel: raid6: using algorithm avx2x2 gen() 23740 MB/s Nov 5 15:49:32.054373 kernel: raid6: .... xor() 20145 MB/s, rmw enabled Nov 5 15:49:32.054388 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:49:32.054402 kernel: xor: automatically using best checksumming function avx Nov 5 15:49:32.054418 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:49:32.054438 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162) Nov 5 15:49:32.054462 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:49:32.054480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:32.054500 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:49:32.054519 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:49:32.054538 kernel: loop: module loaded Nov 5 15:49:32.054558 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:49:32.056621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:49:32.056650 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:49:32.056664 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:49:32.056681 systemd[1]: Detected virtualization kvm. Nov 5 15:49:32.056691 systemd[1]: Detected architecture x86-64. Nov 5 15:49:32.056705 systemd[1]: Running in initrd. Nov 5 15:49:32.056715 systemd[1]: No hostname configured, using default hostname. Nov 5 15:49:32.056730 systemd[1]: Hostname set to . Nov 5 15:49:32.056741 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:49:32.056754 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:49:32.056764 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:49:32.056775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:32.056787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:32.056802 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:49:32.056813 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:49:32.056825 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:49:32.056836 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:49:32.056847 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 5 15:49:32.056858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:32.056874 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:49:32.056885 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:49:32.056896 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:49:32.056906 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:49:32.056917 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:49:32.056928 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:49:32.056939 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:49:32.056952 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:49:32.056962 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:49:32.056973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:32.056984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:32.056995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:32.057005 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:49:32.057019 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:49:32.057030 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:49:32.057040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:49:32.057051 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:49:32.057065 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:49:32.057076 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:49:32.057087 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:49:32.057100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:49:32.057111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:32.057122 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:49:32.057133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:32.057146 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:49:32.057165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:49:32.057242 systemd-journald[297]: Collecting audit messages is disabled. Nov 5 15:49:32.057269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:49:32.057281 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:49:32.057292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:49:32.057303 kernel: Bridge firewalling registered Nov 5 15:49:32.057315 systemd-journald[297]: Journal started Nov 5 15:49:32.057357 systemd-journald[297]: Runtime Journal (/run/log/journal/18a101f5d087454299f01e78eb9415df) is 4.9M, max 39.2M, 34.3M free. 
Nov 5 15:49:32.052793 systemd-modules-load[299]: Inserted module 'br_netfilter' Nov 5 15:49:32.111010 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:49:32.118034 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:32.119956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:32.125073 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:49:32.128756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:49:32.131209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:49:32.134256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:32.149109 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:49:32.157875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:32.162762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:49:32.164706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:32.165502 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:49:32.169820 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:49:32.197431 dracut-cmdline[337]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:49:32.229692 systemd-resolved[333]: Positive Trust Anchors: Nov 5 15:49:32.229707 systemd-resolved[333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:49:32.229712 systemd-resolved[333]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:49:32.229755 systemd-resolved[333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:49:32.264295 systemd-resolved[333]: Defaulting to hostname 'linux'. Nov 5 15:49:32.266145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:49:32.267795 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:32.348725 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:49:32.365611 kernel: iscsi: registered transport (tcp) Nov 5 15:49:32.392798 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:49:32.392883 kernel: QLogic iSCSI HBA Driver Nov 5 15:49:32.430112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 5 15:49:32.465420 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:32.468482 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:49:32.530717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:49:32.533190 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:49:32.534576 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:49:32.577942 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:49:32.581754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:32.613275 systemd-udevd[574]: Using default interface naming scheme 'v257'. Nov 5 15:49:32.626019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:32.629882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:49:32.662365 dracut-pre-trigger[634]: rd.md=0: removing MD RAID activation Nov 5 15:49:32.675918 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:49:32.680870 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:49:32.705889 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:49:32.709743 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:49:32.745654 systemd-networkd[693]: lo: Link UP Nov 5 15:49:32.745664 systemd-networkd[693]: lo: Gained carrier Nov 5 15:49:32.746427 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:49:32.746995 systemd[1]: Reached target network.target - Network. Nov 5 15:49:32.790634 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:49:32.795329 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:49:32.898403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 15:49:32.913898 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 15:49:32.928175 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 15:49:32.942021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:49:32.944225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:49:32.962015 disk-uuid[743]: Primary Header is updated. Nov 5 15:49:32.962015 disk-uuid[743]: Secondary Entries is updated. Nov 5 15:49:32.962015 disk-uuid[743]: Secondary Header is updated. Nov 5 15:49:33.033649 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:49:33.035600 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:49:33.064642 kernel: AES CTR mode by8 optimization enabled Nov 5 15:49:33.098173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:33.098329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:33.106462 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:33.112959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:49:33.127351 systemd-networkd[693]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:49:33.127362 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 5 15:49:33.130737 systemd-networkd[693]: eth0: Link UP Nov 5 15:49:33.131507 systemd-networkd[693]: eth0: Gained carrier Nov 5 15:49:33.131525 systemd-networkd[693]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network Nov 5 15:49:33.139883 systemd-networkd[693]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:49:33.139902 systemd-networkd[693]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:49:33.141539 systemd-networkd[693]: eth1: Link UP Nov 5 15:49:33.142379 systemd-networkd[693]: eth1: Gained carrier Nov 5 15:49:33.142396 systemd-networkd[693]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:49:33.153697 systemd-networkd[693]: eth0: DHCPv4 address 24.144.92.23/20, gateway 24.144.80.1 acquired from 169.254.169.253 Nov 5 15:49:33.164679 systemd-networkd[693]: eth1: DHCPv4 address 10.124.0.21/20 acquired from 169.254.169.253 Nov 5 15:49:33.249759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:33.257747 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:49:33.259342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:49:33.260201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:33.261273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:49:33.263541 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:49:33.289984 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:49:34.023853 disk-uuid[744]: Warning: The kernel is still using the old partition table. Nov 5 15:49:34.023853 disk-uuid[744]: The new table will be used at the next reboot or after you Nov 5 15:49:34.023853 disk-uuid[744]: run partprobe(8) or kpartx(8) Nov 5 15:49:34.023853 disk-uuid[744]: The operation has completed successfully. Nov 5 15:49:34.033332 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:49:34.033484 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:49:34.036302 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:49:34.077760 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Nov 5 15:49:34.081279 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:34.081367 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:34.085722 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:34.085815 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:34.094601 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:34.095469 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:49:34.098911 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 5 15:49:34.293848 systemd-networkd[693]: eth1: Gained IPv6LL Nov 5 15:49:34.295553 ignition[858]: Ignition 2.22.0 Nov 5 15:49:34.295563 ignition[858]: Stage: fetch-offline Nov 5 15:49:34.295660 ignition[858]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:34.295677 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:34.298977 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:49:34.295826 ignition[858]: parsed url from cmdline: "" Nov 5 15:49:34.295831 ignition[858]: no config URL provided Nov 5 15:49:34.295837 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:49:34.295847 ignition[858]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:49:34.295874 ignition[858]: failed to fetch config: resource requires networking Nov 5 15:49:34.302903 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 15:49:34.296078 ignition[858]: Ignition finished successfully Nov 5 15:49:34.352700 ignition[865]: Ignition 2.22.0 Nov 5 15:49:34.352712 ignition[865]: Stage: fetch Nov 5 15:49:34.352894 ignition[865]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:34.352905 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:34.353007 ignition[865]: parsed url from cmdline: "" Nov 5 15:49:34.353011 ignition[865]: no config URL provided Nov 5 15:49:34.353017 ignition[865]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:49:34.353025 ignition[865]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:49:34.353064 ignition[865]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 5 15:49:34.369215 ignition[865]: GET result: OK Nov 5 15:49:34.369371 ignition[865]: parsing config with SHA512: ad2a394e6b9f30a80195f1b0ab5a0a947d75422bbe3cf16d00a2525cc54253546f8c2fe6f465afff75bef2e1d520145226919d6d0f3afa7f6539fd51d3378eea Nov 5 15:49:34.378267 unknown[865]: fetched base config from "system" Nov 5 15:49:34.378282 unknown[865]: fetched base config from "system" Nov 5 15:49:34.378721 ignition[865]: fetch: fetch complete Nov 5 15:49:34.378292 unknown[865]: fetched user config from "digitalocean" Nov 5 15:49:34.378729 ignition[865]: fetch: fetch passed Nov 5 15:49:34.381395 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:49:34.378801 ignition[865]: Ignition finished successfully Nov 5 15:49:34.384718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:49:34.438471 ignition[872]: Ignition 2.22.0 Nov 5 15:49:34.439460 ignition[872]: Stage: kargs Nov 5 15:49:34.439734 ignition[872]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:34.439751 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:34.443053 ignition[872]: kargs: kargs passed Nov 5 15:49:34.443724 ignition[872]: Ignition finished successfully Nov 5 15:49:34.445674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:49:34.448535 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 5 15:49:34.493109 ignition[878]: Ignition 2.22.0 Nov 5 15:49:34.493131 ignition[878]: Stage: disks Nov 5 15:49:34.493333 ignition[878]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:34.493347 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:34.494300 ignition[878]: disks: disks passed Nov 5 15:49:34.494357 ignition[878]: Ignition finished successfully Nov 5 15:49:34.497151 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:49:34.503371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:49:34.504333 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:49:34.505242 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:49:34.506140 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:49:34.507054 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:49:34.509498 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:49:34.550606 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 15:49:34.554288 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:49:34.556162 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:49:34.688617 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:49:34.688681 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:49:34.689824 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:49:34.692896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:49:34.695097 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:49:34.699747 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 5 15:49:34.707788 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 5 15:49:34.710720 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:49:34.711837 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:49:34.724533 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:49:34.733588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894) Nov 5 15:49:34.734807 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:49:34.739586 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:34.743666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:34.766274 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:34.766387 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:34.781917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:49:34.808457 coreos-metadata[897]: Nov 05 15:49:34.808 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:34.816314 coreos-metadata[896]: Nov 05 15:49:34.816 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:34.820283 coreos-metadata[897]: Nov 05 15:49:34.820 INFO Fetch successful Nov 5 15:49:34.830333 initrd-setup-root[925]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:49:34.832494 coreos-metadata[897]: Nov 05 15:49:34.831 INFO wrote hostname ci-4487.0.1-5-f7907e7d84 to /sysroot/etc/hostname Nov 5 15:49:34.834608 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:49:34.837841 coreos-metadata[896]: Nov 05 15:49:34.833 INFO Fetch successful Nov 5 15:49:34.842996 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:49:34.843938 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 5 15:49:34.844132 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Nov 5 15:49:34.850520 initrd-setup-root[941]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:49:34.857528 initrd-setup-root[948]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:49:34.934942 systemd-networkd[693]: eth0: Gained IPv6LL Nov 5 15:49:34.978420 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:49:34.980929 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:49:34.982294 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:49:35.006877 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:35.027555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:49:35.046979 ignition[1017]: INFO : Ignition 2.22.0 Nov 5 15:49:35.047884 ignition[1017]: INFO : Stage: mount Nov 5 15:49:35.047884 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:35.047884 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:35.049556 ignition[1017]: INFO : mount: mount passed Nov 5 15:49:35.049556 ignition[1017]: INFO : Ignition finished successfully Nov 5 15:49:35.050535 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:49:35.052742 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:49:35.063562 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:49:35.078946 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:49:35.103616 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1027) Nov 5 15:49:35.107645 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:49:35.108020 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:49:35.112271 kernel: BTRFS info (device vda6): turning on async discard Nov 5 15:49:35.112367 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 15:49:35.115604 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:49:35.165024 ignition[1043]: INFO : Ignition 2.22.0 Nov 5 15:49:35.165024 ignition[1043]: INFO : Stage: files Nov 5 15:49:35.166422 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:35.166422 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:35.167997 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:49:35.168963 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:49:35.168963 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:49:35.175211 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:49:35.176009 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:49:35.176009 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:49:35.175802 unknown[1043]: wrote ssh authorized keys file for user: core Nov 5 15:49:35.178167 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:49:35.179027 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 15:49:35.221547 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:49:35.272932 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:49:35.273875 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:49:35.273875 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 5 15:49:35.496035 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 5 15:49:35.577265 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:49:35.577265 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:49:35.579264 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:49:35.584336 
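The "files" stage above writes SSH keys for the "core" user and downloads archives such as the Helm tarball into /sysroot. The sketch below builds an Ignition-style config fragment describing those two operations; it is illustrative only, not the actual provisioning config used here, and the field names follow the general Ignition v3 schema as an approximation.

```python
#!/usr/bin/env python3
"""Illustrative Ignition-style config fragment matching the operations seen in
the 'files' stage above: an SSH key for 'core' and a remotely fetched file."""
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "mode": 0o644,  # serialises to decimal 420 in JSON
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                },
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```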
ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:49:35.584336 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:49:35.584336 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 15:49:35.584336 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 15:49:35.584336 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 15:49:35.584336 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 5 15:49:36.055966 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 5 15:49:36.350175 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 5 15:49:36.350175 ignition[1043]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 5 15:49:36.352184 ignition[1043]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:49:36.352994 ignition[1043]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:49:36.352994 ignition[1043]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 5 15:49:36.352994 ignition[1043]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:49:36.356242 ignition[1043]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:49:36.356242 ignition[1043]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:49:36.356242 ignition[1043]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:49:36.356242 ignition[1043]: INFO : files: files passed Nov 5 15:49:36.356242 ignition[1043]: INFO : Ignition finished successfully Nov 5 15:49:36.356019 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:49:36.359875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:49:36.362678 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:49:36.373478 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:49:36.373655 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
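Two of the operations above are worth unpacking: the `prepare-helm.service` unit is preset to enabled, and a sysext image is activated by linking `/etc/extensions/kubernetes.raw` to the downloaded `.raw` file. The sketch below reproduces both ideas under a throwaway root directory; the paths come from the log, while the preset file name is an illustrative assumption.

```python
#!/usr/bin/env python3
"""Sketch of the last two 'files' stage operations above: enable a unit via a
systemd preset and activate a sysext image by symlinking it into /etc/extensions."""
import os

def enable_unit_via_preset(unit: str, root: str = "/") -> None:
    preset_dir = os.path.join(root, "etc/systemd/system-preset")
    os.makedirs(preset_dir, exist_ok=True)
    # "enable <unit>" lines in *.preset files are honoured by `systemctl preset`.
    with open(os.path.join(preset_dir, "20-ignition.preset"), "a") as f:
        f.write(f"enable {unit}\n")

def activate_sysext(image: str, name: str, root: str = "/") -> None:
    ext_dir = os.path.join(root, "etc/extensions")
    os.makedirs(ext_dir, exist_ok=True)
    link = os.path.join(ext_dir, f"{name}.raw")
    if not os.path.lexists(link):
        os.symlink(image, link)  # systemd-sysext merges it on the next refresh

if __name__ == "__main__":
    enable_unit_via_preset("prepare-helm.service", root="./demo-root")
    activate_sysext(
        "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
        "kubernetes",
        root="./demo-root",
    )
```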
Nov 5 15:49:36.384520 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:36.384520 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:36.386183 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:49:36.387300 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:49:36.388871 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:49:36.390367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:49:36.452097 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:49:36.452295 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:49:36.453503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:49:36.454185 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:49:36.455479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:49:36.456897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:49:36.486850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:49:36.489297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:49:36.508148 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:49:36.508410 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:36.509770 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:36.510902 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:49:36.512050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:49:36.512267 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:49:36.514373 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:49:36.514939 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:49:36.515886 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:49:36.516683 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:49:36.517620 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:49:36.518536 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:49:36.519498 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:49:36.520452 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:49:36.521361 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:49:36.522379 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:49:36.523282 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:49:36.524210 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:49:36.524502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:49:36.525714 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:36.526783 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 5 15:49:36.527647 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:49:36.527782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:36.528676 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:49:36.528855 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:49:36.530309 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:49:36.530463 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:49:36.531561 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:49:36.531712 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:49:36.532440 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 15:49:36.532594 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:49:36.535683 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:49:36.536302 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:49:36.536448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:36.539811 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:49:36.541693 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:49:36.542047 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:36.543446 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:49:36.543720 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:49:36.546561 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:49:36.546754 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:49:36.558601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:49:36.558756 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:49:36.583287 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:49:36.591448 ignition[1099]: INFO : Ignition 2.22.0 Nov 5 15:49:36.591448 ignition[1099]: INFO : Stage: umount Nov 5 15:49:36.602474 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:49:36.602474 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 5 15:49:36.602474 ignition[1099]: INFO : umount: umount passed Nov 5 15:49:36.602474 ignition[1099]: INFO : Ignition finished successfully Nov 5 15:49:36.604041 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:49:36.604164 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:49:36.605465 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:49:36.607798 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:49:36.628450 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:49:36.628541 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:49:36.630148 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:49:36.630227 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:49:36.632069 systemd[1]: Stopped target network.target - Network. Nov 5 15:49:36.633695 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Nov 5 15:49:36.633796 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:49:36.634482 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:49:36.635339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:49:36.635601 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:36.636242 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:49:36.637149 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:49:36.637960 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:49:36.638010 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:49:36.638882 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:49:36.638926 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:49:36.639817 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:49:36.639885 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:49:36.640599 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:49:36.640645 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:49:36.641524 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:49:36.642304 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:49:36.643716 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:49:36.643814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:49:36.644821 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:49:36.644933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:49:36.654450 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:49:36.655123 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:49:36.657996 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:49:36.658611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:49:36.662366 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:49:36.662969 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:49:36.663014 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:36.664972 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:49:36.665992 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:49:36.666067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:49:36.666559 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:49:36.666613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:36.667039 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:49:36.667075 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:36.673462 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:36.679606 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:49:36.679868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:36.680996 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Nov 5 15:49:36.681067 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:36.682745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:49:36.682783 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:36.683224 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:49:36.683280 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:49:36.684255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:49:36.684320 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:49:36.688745 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:49:36.688813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:49:36.691328 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:49:36.691789 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:49:36.691854 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:36.692357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:49:36.692438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:36.694969 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:36.695033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:36.715540 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:49:36.716402 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:49:36.721547 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:49:36.722405 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:49:36.724132 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:49:36.726265 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:49:36.751038 systemd[1]: Switching root. Nov 5 15:49:36.808136 systemd-journald[297]: Journal stopped Nov 5 15:49:37.979243 systemd-journald[297]: Received SIGTERM from PID 1 (systemd). Nov 5 15:49:37.979317 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:49:37.979338 kernel: SELinux: policy capability open_perms=1 Nov 5 15:49:37.979360 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:49:37.979372 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:49:37.979384 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:49:37.979397 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:49:37.979410 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:49:37.979427 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:49:37.979440 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:49:37.979456 kernel: audit: type=1403 audit(1762357776.929:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:49:37.979470 systemd[1]: Successfully loaded SELinux policy in 71.709ms. Nov 5 15:49:37.979488 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.439ms. 
Nov 5 15:49:37.979503 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:49:37.979517 systemd[1]: Detected virtualization kvm. Nov 5 15:49:37.979530 systemd[1]: Detected architecture x86-64. Nov 5 15:49:37.979546 systemd[1]: Detected first boot. Nov 5 15:49:37.979560 systemd[1]: Hostname set to . Nov 5 15:49:37.979582 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:49:37.979596 zram_generator::config[1143]: No configuration found. Nov 5 15:49:37.979614 kernel: Guest personality initialized and is inactive Nov 5 15:49:37.979627 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 15:49:37.979640 kernel: Initialized host personality Nov 5 15:49:37.979659 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:49:37.979672 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:49:37.979690 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:49:37.979703 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:49:37.979716 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:49:37.979731 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:49:37.979744 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:49:37.979761 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:49:37.979774 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:49:37.979788 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:49:37.979802 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:49:37.979816 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:49:37.979830 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:49:37.979845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:49:37.979861 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:49:37.979875 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:49:37.979888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 15:49:37.979907 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:49:37.979923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:49:37.979937 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:49:37.979950 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:49:37.979964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:49:37.979978 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:49:37.979991 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:49:37.980005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
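Above, systemd detects a first boot and initializes the machine ID from the SMBIOS/DMI UUID exposed by the hypervisor. The sketch below mirrors that idea by reading the DMI product UUID and storing it in machine-id format (32 lowercase hex characters, no dashes); it is not systemd's exact code path or precedence logic.

```python
#!/usr/bin/env python3
"""Rough sketch of 'Initializing machine ID from SMBIOS/DMI UUID' as logged
above: derive a machine-id from the hypervisor-provided product UUID."""
import pathlib

def machine_id_from_dmi(
    src: str = "/sys/class/dmi/id/product_uuid",
    dst: str = "/etc/machine-id",
) -> str:
    uuid = pathlib.Path(src).read_text().strip()
    machine_id = uuid.replace("-", "").lower()
    assert len(machine_id) == 32, "unexpected UUID length"
    pathlib.Path(dst).write_text(machine_id + "\n")
    return machine_id

if __name__ == "__main__":
    print(machine_id_from_dmi(dst="./machine-id.test"))
```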
Nov 5 15:49:37.980019 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:49:37.980036 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:49:37.980050 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:49:37.980063 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:49:37.980076 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:49:37.980089 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:49:37.980103 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:49:37.980116 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:49:37.980133 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:49:37.980148 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:49:37.980162 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:49:37.980175 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:49:37.980188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:49:37.980202 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:49:37.980215 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:49:37.980231 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:37.980244 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:49:37.980257 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:49:37.980270 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:49:37.980283 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:49:37.980297 systemd[1]: Reached target machines.target - Containers. Nov 5 15:49:37.980310 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:49:37.980325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:37.980339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:49:37.980352 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:49:37.980366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:37.980379 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:49:37.980393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:37.980406 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:49:37.980422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:37.980436 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:49:37.980449 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:49:37.980462 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:49:37.980475 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Nov 5 15:49:37.980488 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:49:37.980506 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:37.980519 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:49:37.980532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:49:37.980546 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:49:37.980563 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:49:37.982623 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:49:37.982642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:49:37.982658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:37.982673 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:49:37.982687 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:49:37.982701 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:49:37.982723 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:49:37.982737 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:49:37.982752 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:49:37.982765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:49:37.982779 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:49:37.982792 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:49:37.982806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:37.982823 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:49:37.982837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:37.982851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:37.982865 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:37.982882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:37.982895 kernel: ACPI: bus type drm_connector registered Nov 5 15:49:37.982909 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:49:37.982924 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:49:37.982938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:49:37.982951 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:49:37.982966 kernel: fuse: init (API version 7.41) Nov 5 15:49:37.982979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:49:37.982995 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:49:37.983009 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:49:37.983023 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:49:37.983037 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 5 15:49:37.983050 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:49:37.983068 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:49:37.983115 systemd-journald[1220]: Collecting audit messages is disabled. Nov 5 15:49:37.983145 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:49:37.983177 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:49:37.983193 systemd-journald[1220]: Journal started Nov 5 15:49:37.983218 systemd-journald[1220]: Runtime Journal (/run/log/journal/18a101f5d087454299f01e78eb9415df) is 4.9M, max 39.2M, 34.3M free. Nov 5 15:49:37.607367 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:49:37.631425 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 15:49:37.632063 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:49:37.986615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:37.991605 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:49:37.991671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:49:37.996596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:49:37.999606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:38.003636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:49:38.008777 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:49:38.011605 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:49:38.033840 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:49:38.036082 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:49:38.038294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:49:38.043165 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:49:38.046475 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:49:38.051879 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:49:38.085145 systemd-journald[1220]: Time spent on flushing to /var/log/journal/18a101f5d087454299f01e78eb9415df is 54.977ms for 996 entries. Nov 5 15:49:38.085145 systemd-journald[1220]: System Journal (/var/log/journal/18a101f5d087454299f01e78eb9415df) is 8M, max 163.5M, 155.5M free. Nov 5 15:49:38.149707 systemd-journald[1220]: Received client request to flush runtime journal. Nov 5 15:49:38.149777 kernel: loop1: detected capacity change from 0 to 219144 Nov 5 15:49:38.149803 kernel: loop2: detected capacity change from 0 to 110984 Nov 5 15:49:38.085169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:49:38.113145 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:49:38.139609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
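The journald entries above size the runtime journal under /run/log/journal and then flush it to persistent storage once /var/log/journal is writable. A small sketch of that hand-off, using the standard journalctl flags, is shown below; it only illustrates the step and is not how systemd-journal-flush.service is implemented.

```python
#!/usr/bin/env python3
"""Sketch around the journal flush above: move the runtime journal to
persistent storage and report total journal disk usage."""
import subprocess

def flush_and_report() -> str:
    # Ask journald to migrate /run/log/journal into /var/log/journal.
    subprocess.run(["journalctl", "--flush"], check=True)
    usage = subprocess.run(
        ["journalctl", "--disk-usage"], check=True, capture_output=True, text=True
    )
    return usage.stdout.strip()

if __name__ == "__main__":
    print(flush_and_report())
```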
Nov 5 15:49:38.142426 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:49:38.152844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:49:38.159750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:49:38.162770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:49:38.172606 kernel: loop3: detected capacity change from 0 to 128048 Nov 5 15:49:38.185860 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:49:38.202535 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Nov 5 15:49:38.202555 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Nov 5 15:49:38.207978 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:49:38.217322 kernel: loop4: detected capacity change from 0 to 8 Nov 5 15:49:38.236596 kernel: loop5: detected capacity change from 0 to 219144 Nov 5 15:49:38.255875 kernel: loop6: detected capacity change from 0 to 110984 Nov 5 15:49:38.258910 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:49:38.272595 kernel: loop7: detected capacity change from 0 to 128048 Nov 5 15:49:38.286597 kernel: loop1: detected capacity change from 0 to 8 Nov 5 15:49:38.287039 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'. Nov 5 15:49:38.291878 (sd-merge)[1290]: Merged extensions into '/usr'. Nov 5 15:49:38.296531 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:49:38.296702 systemd[1]: Reloading... Nov 5 15:49:38.405385 systemd-resolved[1282]: Positive Trust Anchors: Nov 5 15:49:38.405402 systemd-resolved[1282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:49:38.405407 systemd-resolved[1282]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:49:38.405447 systemd-resolved[1282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:49:38.432043 systemd-resolved[1282]: Using system hostname 'ci-4487.0.1-5-f7907e7d84'. Nov 5 15:49:38.450489 zram_generator::config[1326]: No configuration found. Nov 5 15:49:38.668291 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:49:38.669106 systemd[1]: Reloading finished in 371 ms. Nov 5 15:49:38.693655 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:49:38.694940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:49:38.699067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:49:38.701517 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:49:38.708729 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:49:38.714850 systemd[1]: Starting ensure-sysext.service... 
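Above, sd-merge overlays the sysext images (containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw, oem-digitalocean.raw) onto /usr. The sketch below lists the images a host would merge and prints the current merge state via `systemd-sysext status`; the set of search directories is an assumption about this image's layout.

```python
#!/usr/bin/env python3
"""Sketch for the extension merge logged above: enumerate sysext images and
show the merge status reported by systemd-sysext."""
import pathlib
import subprocess

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images() -> list[str]:
    images: list[str] = []
    for d in SEARCH_DIRS:
        p = pathlib.Path(d)
        if p.is_dir():
            images += sorted(str(e) for e in p.glob("*.raw"))
    return images

if __name__ == "__main__":
    for image in list_extension_images():
        print("extension image:", image)
    subprocess.run(["systemd-sysext", "status"], check=False)
```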
Nov 5 15:49:38.721814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:49:38.743526 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:49:38.749607 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:49:38.756363 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:49:38.756511 systemd[1]: Reloading... Nov 5 15:49:38.784819 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:49:38.784976 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:49:38.785952 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:49:38.786237 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:49:38.787523 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:49:38.787893 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Nov 5 15:49:38.787994 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Nov 5 15:49:38.794311 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:49:38.794324 systemd-tmpfiles[1369]: Skipping /boot Nov 5 15:49:38.809047 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:49:38.809062 systemd-tmpfiles[1369]: Skipping /boot Nov 5 15:49:38.831605 zram_generator::config[1397]: No configuration found. Nov 5 15:49:39.077241 systemd[1]: Reloading finished in 320 ms. Nov 5 15:49:39.101986 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:49:39.115409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:49:39.128400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:49:39.133021 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:49:39.136636 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:49:39.144429 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:49:39.148997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:49:39.154241 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:49:39.164817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.165036 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:39.170712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:39.179075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:39.186825 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:39.187672 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 5 15:49:39.187804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:39.187898 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.192303 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.192514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:39.193812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:39.193912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:39.194001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.207726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.209383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:39.212948 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:49:39.214273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:39.215764 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:39.215987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.236154 systemd[1]: Finished ensure-sysext.service. Nov 5 15:49:39.237989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:49:39.252999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:49:39.346839 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:39.347682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:39.353016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:39.360158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:49:39.361417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:39.362324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:39.367337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:49:39.367498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:39.369527 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 5 15:49:39.370339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:49:39.375697 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Nov 5 15:49:39.383040 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:49:39.419067 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:49:39.421303 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:49:39.438091 augenrules[1486]: No rules Nov 5 15:49:39.444058 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:49:39.446912 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:49:39.470750 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:49:39.480042 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:49:39.568345 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 5 15:49:39.572821 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 5 15:49:39.574640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.574873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:49:39.577370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:49:39.584961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:49:39.589440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:49:39.591761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:49:39.591828 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:49:39.591880 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:49:39.591902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:49:39.672925 kernel: ISO 9660 Extensions: RRIP_1991A Nov 5 15:49:39.675043 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:49:39.681371 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 5 15:49:39.683280 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:49:39.702334 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:49:39.723871 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:49:39.728770 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:49:39.739770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:49:39.741799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Nov 5 15:49:39.749344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:49:39.749739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:49:39.752856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:49:39.752918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:49:39.830385 systemd-networkd[1499]: lo: Link UP Nov 5 15:49:39.830398 systemd-networkd[1499]: lo: Gained carrier Nov 5 15:49:39.834889 systemd-networkd[1499]: eth0: Configuring with /run/systemd/network/10-0e:5f:b0:62:d0:f8.network. Nov 5 15:49:39.839470 systemd-networkd[1499]: eth0: Link UP Nov 5 15:49:39.839506 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:49:39.839756 systemd-networkd[1499]: eth0: Gained carrier Nov 5 15:49:39.847852 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:39.850851 systemd[1]: Reached target network.target - Network. Nov 5 15:49:39.856009 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:49:39.859108 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:49:39.922511 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:49:39.952102 systemd-networkd[1499]: eth1: Configuring with /run/systemd/network/10-82:a1:34:3d:96:a8.network. Nov 5 15:49:39.955819 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:39.958947 systemd-networkd[1499]: eth1: Link UP Nov 5 15:49:39.959797 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:39.960900 systemd-networkd[1499]: eth1: Gained carrier Nov 5 15:49:39.968042 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:39.969895 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:40.031610 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:49:40.050600 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:49:40.055301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:49:40.061237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:49:40.071614 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:49:40.141247 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 5 15:49:40.146974 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 15:49:40.144097 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:49:40.178357 ldconfig[1448]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:49:40.184675 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:49:40.193344 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:49:40.222068 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
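Above, systemd-networkd brings up eth0 from /run/systemd/network/10-0e:5f:b0:62:d0:f8.network, i.e. a unit generated per interface and matched by MAC address. The sketch below renders a unit of that shape; matching by MAC follows from the file name in the log, while the [Network] contents (plain DHCP) are an assumption, since the real unit is written by the platform's network generator.

```python
#!/usr/bin/env python3
"""Illustrative generator for a systemd-networkd .network unit like the one
referenced above, matched on the interface's MAC address."""
import pathlib

def write_network_unit(mac: str, directory: str = "/run/systemd/network") -> pathlib.Path:
    unit = (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"          # assumed; the generated unit may use static addressing
    )
    path = pathlib.Path(directory) / f"10-{mac}.network"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(unit)
    return path

if __name__ == "__main__":
    print(write_network_unit("0e:5f:b0:62:d0:f8", directory="./demo-network"))
```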
Nov 5 15:49:40.223238 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:49:40.224298 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:49:40.225897 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:49:40.227747 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:49:40.228632 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:49:40.229681 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:49:40.231010 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:49:40.232671 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:49:40.232720 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:49:40.233268 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:49:40.235657 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:49:40.240896 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:49:40.251964 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:49:40.255939 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:49:40.256649 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:49:40.271884 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:49:40.274247 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:49:40.277716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:49:40.279359 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:49:40.281670 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:49:40.282299 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:49:40.282332 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:49:40.323850 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:49:40.351393 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:49:40.357159 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:49:40.362926 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:49:40.370254 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:49:40.376006 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:49:40.377687 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:49:40.384962 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:49:40.391034 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:49:40.398148 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:49:40.409496 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 5 15:49:40.416972 coreos-metadata[1559]: Nov 05 15:49:40.416 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:40.418558 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:49:40.436914 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:49:40.437515 coreos-metadata[1559]: Nov 05 15:49:40.437 INFO Fetch successful Nov 5 15:49:40.438950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:49:40.439853 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:49:40.442914 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:49:40.462330 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:49:40.498667 jq[1575]: true Nov 5 15:49:40.499076 jq[1562]: false Nov 5 15:49:40.503187 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:49:40.505879 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:49:40.506845 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:49:40.513167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:49:40.514674 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:49:40.548740 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Nov 5 15:49:40.554961 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Nov 5 15:49:40.570163 extend-filesystems[1565]: Found /dev/vda6 Nov 5 15:49:40.586788 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Nov 5 15:49:40.586788 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:49:40.586788 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Nov 5 15:49:40.582872 oslogin_cache_refresh[1566]: Failure getting users, quitting Nov 5 15:49:40.582898 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:49:40.582967 oslogin_cache_refresh[1566]: Refreshing group entry cache Nov 5 15:49:40.592858 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Nov 5 15:49:40.592858 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:49:40.588216 oslogin_cache_refresh[1566]: Failure getting groups, quitting Nov 5 15:49:40.588235 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:49:40.600195 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:49:40.602142 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Nov 5 15:49:40.605845 extend-filesystems[1565]: Found /dev/vda9 Nov 5 15:49:40.649910 jq[1585]: true Nov 5 15:49:40.622418 dbus-daemon[1560]: [system] SELinux support is enabled Nov 5 15:49:40.621156 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:49:40.658851 extend-filesystems[1565]: Checking size of /dev/vda9 Nov 5 15:49:40.641730 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:49:40.652882 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:49:40.653209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:49:40.669164 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:49:40.669227 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:49:40.672556 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:49:40.672728 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 5 15:49:40.672757 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:49:40.687388 update_engine[1572]: I20251105 15:49:40.683626 1572 main.cc:92] Flatcar Update Engine starting Nov 5 15:49:40.692896 tar[1587]: linux-amd64/LICENSE Nov 5 15:49:40.693254 tar[1587]: linux-amd64/helm Nov 5 15:49:40.710541 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:49:40.711243 update_engine[1572]: I20251105 15:49:40.710989 1572 update_check_scheduler.cc:74] Next update check in 3m20s Nov 5 15:49:40.717025 systemd-logind[1571]: New seat seat0. Nov 5 15:49:40.727432 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:49:40.746275 extend-filesystems[1565]: Resized partition /dev/vda9 Nov 5 15:49:40.750167 extend-filesystems[1620]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:49:40.756610 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Nov 5 15:49:40.774442 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:49:40.776307 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:49:40.787664 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:49:40.811429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:40.918975 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:49:40.929987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:49:40.936947 systemd[1]: Starting sshkeys.service... Nov 5 15:49:40.953813 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Nov 5 15:49:40.980053 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:49:40.991347 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
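[editor's note] Both update-ssh-keys ("Updated /home/core/.ssh/authorized_keys") and the coreos-metadata-sshkeys unit being started here end by rewriting the core user's authorized_keys file. A hedged Python sketch of that final write step; the helper name and the write-to-temp-then-rename pattern are illustrative assumptions, while the 0700/0600 modes are the usual OpenSSH expectations for ~/.ssh.

    import os
    from pathlib import Path

    def write_authorized_keys(keys, home="/home/core"):
        """Rewrite ~/.ssh/authorized_keys atomically with OpenSSH-friendly modes."""
        ssh_dir = Path(home) / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        target = ssh_dir / "authorized_keys"
        tmp = target.with_suffix(".tmp")
        tmp.write_text("\n".join(keys) + "\n")
        tmp.chmod(0o600)
        os.replace(tmp, target)  # atomic rename over the old file

    # write_authorized_keys(["ssh-ed25519 AAAA... core@example"])  # hypothetical key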
Nov 5 15:49:41.004394 extend-filesystems[1620]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 15:49:41.004394 extend-filesystems[1620]: old_desc_blocks = 1, new_desc_blocks = 7 Nov 5 15:49:41.004394 extend-filesystems[1620]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Nov 5 15:49:41.004091 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:49:41.024136 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Nov 5 15:49:41.005326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:49:41.154670 coreos-metadata[1640]: Nov 05 15:49:41.154 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:49:41.169075 coreos-metadata[1640]: Nov 05 15:49:41.168 INFO Fetch successful Nov 5 15:49:41.181141 unknown[1640]: wrote ssh authorized keys file for user: core Nov 5 15:49:41.229996 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:49:41.231859 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:49:41.286135 systemd-logind[1571]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:49:41.365759 systemd-logind[1571]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:49:41.390593 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:49:41.436091 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:49:41.463990 systemd[1]: Finished sshkeys.service. Nov 5 15:49:41.468004 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:49:41.529368 systemd-networkd[1499]: eth0: Gained IPv6LL Nov 5 15:49:41.531718 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:41.536956 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:49:41.547149 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 5 15:49:41.547281 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 5 15:49:41.575717 kernel: Console: switching to colour dummy device 80x25 Nov 5 15:49:41.588080 containerd[1604]: time="2025-11-05T15:49:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:49:41.590689 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 5 15:49:41.590809 kernel: [drm] features: -context_init Nov 5 15:49:41.593404 systemd-vconsole-setup[1632]: KD_FONT_OP_SET failed, fonts will not be copied to tty5: Function not implemented Nov 5 15:49:41.593493 systemd-vconsole-setup[1632]: KD_FONT_OP_SET failed, fonts will not be copied to tty6: Function not implemented Nov 5 15:49:41.595439 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:49:41.600159 containerd[1604]: time="2025-11-05T15:49:41.600098847Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:49:41.602030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:49:41.606506 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:49:41.608835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:41.624802 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
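[editor's note] The extend-filesystems run above grows /dev/vda9 online from 456704 to 14138363 blocks, which resize2fs reports as 4k blocks. A quick Python check of what those figures mean in bytes; the block counts and block size come straight from the log, only the conversion to GiB is added.

    BLOCK = 4096                 # "(4k) blocks" per the resize2fs output above
    OLD_BLOCKS = 456_704
    NEW_BLOCKS = 14_138_363

    def gib(blocks):
        return blocks * BLOCK / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~1.74 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~53.93 GiB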
Nov 5 15:49:41.631316 kernel: [drm] number of scanouts: 1 Nov 5 15:49:41.631405 kernel: [drm] number of cap sets: 0 Nov 5 15:49:41.631692 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:49:41.631820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:41.632138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:41.632471 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:41.634737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:41.654542 systemd-networkd[1499]: eth1: Gained IPv6LL Nov 5 15:49:41.655055 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:41.670689 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 5 15:49:41.722835 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:49:41.725349 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:49:41.742247 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:49:41.749874 containerd[1604]: time="2025-11-05T15:49:41.749811750Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.483µs" Nov 5 15:49:41.749874 containerd[1604]: time="2025-11-05T15:49:41.749858887Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:49:41.750048 containerd[1604]: time="2025-11-05T15:49:41.749893994Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:49:41.750161 containerd[1604]: time="2025-11-05T15:49:41.750130548Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:49:41.750219 containerd[1604]: time="2025-11-05T15:49:41.750164880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:49:41.750219 containerd[1604]: time="2025-11-05T15:49:41.750204028Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:49:41.750368 containerd[1604]: time="2025-11-05T15:49:41.750342046Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:49:41.750368 containerd[1604]: time="2025-11-05T15:49:41.750364422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:49:41.759187 containerd[1604]: time="2025-11-05T15:49:41.759078416Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:49:41.759187 containerd[1604]: time="2025-11-05T15:49:41.759138121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:49:41.759187 containerd[1604]: time="2025-11-05T15:49:41.759176756Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:49:41.759187 containerd[1604]: time="2025-11-05T15:49:41.759190803Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:49:41.759442 containerd[1604]: time="2025-11-05T15:49:41.759376868Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:49:41.761148 containerd[1604]: time="2025-11-05T15:49:41.761090443Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:49:41.761404 containerd[1604]: time="2025-11-05T15:49:41.761173633Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:49:41.761404 containerd[1604]: time="2025-11-05T15:49:41.761191551Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:49:41.761404 containerd[1604]: time="2025-11-05T15:49:41.761256874Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:49:41.765885 containerd[1604]: time="2025-11-05T15:49:41.765080222Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:49:41.765885 containerd[1604]: time="2025-11-05T15:49:41.765280065Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.770873237Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.770994870Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771041850Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771062309Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771079806Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771095359Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771120191Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771153718Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771207782Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771233439Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771247221Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771265036Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Nov 5 15:49:41.771532 containerd[1604]: time="2025-11-05T15:49:41.771527499Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771561419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771597857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771621317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771653814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771671761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771686833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771699579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771715653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771742516Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771761019Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771867022Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771898195Z" level=info msg="Start snapshots syncer" Nov 5 15:49:41.772097 containerd[1604]: time="2025-11-05T15:49:41.771946231Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:49:41.772552 containerd[1604]: time="2025-11-05T15:49:41.772307664Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:49:41.772552 containerd[1604]: time="2025-11-05T15:49:41.772389084Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:49:41.785755 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 5 15:49:41.786900 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:49:41.789550 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.793730696Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794031896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794082031Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794107553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794126403Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794180590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794203284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794221331Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794272646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794297080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794315191Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794369472Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794398041Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:49:41.795139 containerd[1604]: time="2025-11-05T15:49:41.794410147Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794422954Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794434793Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794449651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794469741Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794497585Z" level=info msg="runtime interface created" Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794505203Z" level=info msg="created NRI 
interface" Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794516661Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794541980Z" level=info msg="Connect containerd service" Nov 5 15:49:41.795711 containerd[1604]: time="2025-11-05T15:49:41.794624388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:49:41.803836 containerd[1604]: time="2025-11-05T15:49:41.801456944Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:49:41.849592 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 5 15:49:41.866022 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:49:41.870707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:41.941514 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:49:41.951536 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:49:41.953236 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:49:41.953469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:49:41.953937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:41.955173 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:41.965901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:49:42.042683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169244153Z" level=info msg="Start subscribing containerd event" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169336406Z" level=info msg="Start recovering state" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169545284Z" level=info msg="Start event monitor" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169667161Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169681178Z" level=info msg="Start streaming server" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169702513Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169713741Z" level=info msg="runtime interface starting up..." Nov 5 15:49:42.169751 containerd[1604]: time="2025-11-05T15:49:42.169722528Z" level=info msg="starting plugins..." Nov 5 15:49:42.171132 containerd[1604]: time="2025-11-05T15:49:42.170866700Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:49:42.171132 containerd[1604]: time="2025-11-05T15:49:42.171082664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:49:42.171302 containerd[1604]: time="2025-11-05T15:49:42.171193151Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:49:42.171473 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 5 15:49:42.175797 containerd[1604]: time="2025-11-05T15:49:42.174825797Z" level=info msg="containerd successfully booted in 0.587356s" Nov 5 15:49:42.358397 tar[1587]: linux-amd64/README.md Nov 5 15:49:42.383854 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:49:43.132286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:43.135831 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:49:43.137527 systemd[1]: Startup finished in 2.364s (kernel) + 5.268s (initrd) + 6.278s (userspace) = 13.910s. Nov 5 15:49:43.144166 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:43.399381 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:49:43.401862 systemd[1]: Started sshd@0-24.144.92.23:22-139.178.68.195:45060.service - OpenSSH per-connection server daemon (139.178.68.195:45060). Nov 5 15:49:43.525281 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 45060 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:43.529238 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:43.540810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:49:43.543278 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:49:43.558693 systemd-logind[1571]: New session 1 of user core. Nov 5 15:49:43.571949 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:49:43.580039 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:49:43.595416 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:49:43.600175 systemd-logind[1571]: New session c1 of user core. Nov 5 15:49:43.771309 systemd[1747]: Queued start job for default target default.target. Nov 5 15:49:43.779080 systemd[1747]: Created slice app.slice - User Application Slice. Nov 5 15:49:43.779131 systemd[1747]: Reached target paths.target - Paths. Nov 5 15:49:43.779925 systemd[1747]: Reached target timers.target - Timers. Nov 5 15:49:43.782524 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:49:43.789124 kubelet[1732]: E1105 15:49:43.789058 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:43.792056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:43.792235 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:43.794221 systemd[1]: kubelet.service: Consumed 1.188s CPU time, 257.2M memory peak. Nov 5 15:49:43.803968 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:49:43.804155 systemd[1747]: Reached target sockets.target - Sockets. Nov 5 15:49:43.804225 systemd[1747]: Reached target basic.target - Basic System. Nov 5 15:49:43.804270 systemd[1747]: Reached target default.target - Main User Target. Nov 5 15:49:43.804303 systemd[1747]: Startup finished in 193ms. Nov 5 15:49:43.805173 systemd[1]: Started user@500.service - User Manager for UID 500. 
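[editor's note] kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style setup that file normally appears when the node is joined. A hedged Python sketch that checks for the file and, purely for illustration, drops in a bare KubeletConfiguration stub; the cgroupDriver value mirrors the SystemdCgroup=true runc option in the containerd config dumped earlier, and everything else about the stub is an assumption rather than what this node's real config will contain.

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")   # path from the kubelet error above

    # Minimal illustrative stub; a real file is produced by kubeadm or your own tooling.
    STUB_LINES = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
    ]

    if not CONFIG.exists():
        CONFIG.parent.mkdir(parents=True, exist_ok=True)
        CONFIG.write_text("\n".join(STUB_LINES) + "\n")
        print(f"wrote stub {CONFIG}")
    else:
        print(f"{CONFIG} already present")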
Nov 5 15:49:43.813198 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:49:43.880963 systemd[1]: Started sshd@1-24.144.92.23:22-139.178.68.195:45072.service - OpenSSH per-connection server daemon (139.178.68.195:45072). Nov 5 15:49:43.955507 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 45072 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:43.957466 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:43.963647 systemd-logind[1571]: New session 2 of user core. Nov 5 15:49:43.972923 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:49:44.040441 sshd[1763]: Connection closed by 139.178.68.195 port 45072 Nov 5 15:49:44.041422 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.053552 systemd[1]: sshd@1-24.144.92.23:22-139.178.68.195:45072.service: Deactivated successfully. Nov 5 15:49:44.055782 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:49:44.056874 systemd-logind[1571]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:49:44.060605 systemd[1]: Started sshd@2-24.144.92.23:22-139.178.68.195:45078.service - OpenSSH per-connection server daemon (139.178.68.195:45078). Nov 5 15:49:44.061659 systemd-logind[1571]: Removed session 2. Nov 5 15:49:44.120289 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 45078 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:44.121745 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:44.129667 systemd-logind[1571]: New session 3 of user core. Nov 5 15:49:44.135905 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:49:44.193087 sshd[1772]: Connection closed by 139.178.68.195 port 45078 Nov 5 15:49:44.193832 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.209493 systemd[1]: sshd@2-24.144.92.23:22-139.178.68.195:45078.service: Deactivated successfully. Nov 5 15:49:44.212248 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:49:44.215068 systemd-logind[1571]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:49:44.217739 systemd-logind[1571]: Removed session 3. Nov 5 15:49:44.220919 systemd[1]: Started sshd@3-24.144.92.23:22-139.178.68.195:45084.service - OpenSSH per-connection server daemon (139.178.68.195:45084). Nov 5 15:49:44.289592 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 45084 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:44.291288 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:44.298250 systemd-logind[1571]: New session 4 of user core. Nov 5 15:49:44.300857 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:49:44.365528 sshd[1781]: Connection closed by 139.178.68.195 port 45084 Nov 5 15:49:44.366117 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.379246 systemd[1]: sshd@3-24.144.92.23:22-139.178.68.195:45084.service: Deactivated successfully. Nov 5 15:49:44.381437 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:49:44.383087 systemd-logind[1571]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:49:44.385915 systemd[1]: Started sshd@4-24.144.92.23:22-139.178.68.195:45086.service - OpenSSH per-connection server daemon (139.178.68.195:45086). 
Nov 5 15:49:44.387859 systemd-logind[1571]: Removed session 4. Nov 5 15:49:44.447333 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 45086 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:44.448753 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:44.454924 systemd-logind[1571]: New session 5 of user core. Nov 5 15:49:44.468033 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:49:44.540107 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:49:44.540432 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:44.552959 sudo[1791]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:44.556548 sshd[1790]: Connection closed by 139.178.68.195 port 45086 Nov 5 15:49:44.557281 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.574098 systemd[1]: sshd@4-24.144.92.23:22-139.178.68.195:45086.service: Deactivated successfully. Nov 5 15:49:44.576358 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:49:44.577189 systemd-logind[1571]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:49:44.581003 systemd[1]: Started sshd@5-24.144.92.23:22-139.178.68.195:45092.service - OpenSSH per-connection server daemon (139.178.68.195:45092). Nov 5 15:49:44.582134 systemd-logind[1571]: Removed session 5. Nov 5 15:49:44.637383 sshd[1797]: Accepted publickey for core from 139.178.68.195 port 45092 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:44.638947 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:44.644494 systemd-logind[1571]: New session 6 of user core. Nov 5 15:49:44.651907 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:49:44.712761 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:49:44.713118 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:44.719003 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:44.727330 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:49:44.728080 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:44.742476 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:49:44.797277 augenrules[1824]: No rules Nov 5 15:49:44.798051 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:49:44.798268 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:49:44.799401 sudo[1801]: pam_unix(sudo:session): session closed for user root Nov 5 15:49:44.804700 sshd[1800]: Connection closed by 139.178.68.195 port 45092 Nov 5 15:49:44.805244 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Nov 5 15:49:44.823496 systemd[1]: sshd@5-24.144.92.23:22-139.178.68.195:45092.service: Deactivated successfully. Nov 5 15:49:44.826472 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:49:44.828050 systemd-logind[1571]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:49:44.830679 systemd-logind[1571]: Removed session 6. 
Nov 5 15:49:44.832533 systemd[1]: Started sshd@6-24.144.92.23:22-139.178.68.195:45096.service - OpenSSH per-connection server daemon (139.178.68.195:45096). Nov 5 15:49:44.898510 sshd[1834]: Accepted publickey for core from 139.178.68.195 port 45096 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:49:44.900150 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:49:44.906273 systemd-logind[1571]: New session 7 of user core. Nov 5 15:49:44.913963 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:49:44.976769 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:49:44.977181 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:49:45.501448 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:49:45.519113 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:49:45.869033 dockerd[1855]: time="2025-11-05T15:49:45.868487914Z" level=info msg="Starting up" Nov 5 15:49:45.870407 dockerd[1855]: time="2025-11-05T15:49:45.870149669Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:49:45.886958 dockerd[1855]: time="2025-11-05T15:49:45.886905014Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:49:46.003941 dockerd[1855]: time="2025-11-05T15:49:46.003869454Z" level=info msg="Loading containers: start." Nov 5 15:49:46.018622 kernel: Initializing XFRM netlink socket Nov 5 15:49:46.260434 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:46.261944 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:46.276260 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:46.317329 systemd-networkd[1499]: docker0: Link UP Nov 5 15:49:46.318059 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Nov 5 15:49:46.320337 dockerd[1855]: time="2025-11-05T15:49:46.320291950Z" level=info msg="Loading containers: done." Nov 5 15:49:46.336327 dockerd[1855]: time="2025-11-05T15:49:46.336212243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:49:46.336327 dockerd[1855]: time="2025-11-05T15:49:46.336323150Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:49:46.336527 dockerd[1855]: time="2025-11-05T15:49:46.336410903Z" level=info msg="Initializing buildkit" Nov 5 15:49:46.339354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2680021291-merged.mount: Deactivated successfully. 
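[editor's note] dockerd is starting up here (version 28.0.4, overlay2 storage driver) and, once initialization completes just below, serves its API on /run/docker.sock. A minimal Python sketch that speaks HTTP over that Unix socket to ask the daemon for its version; GET /version is the standard Docker Engine API endpoint, and the raw-socket approach is only to keep the example dependency-free.

    import json
    import socket

    DOCKER_SOCK = "/run/docker.sock"   # "API listen on /run/docker.sock" below

    def docker_version():
        """Issue GET /version to the Docker Engine API over its Unix socket."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(DOCKER_SOCK)
            s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
            raw = b""
            while chunk := s.recv(4096):
                raw += chunk
        headers, _, body = raw.partition(b"\r\n\r\n")
        return json.loads(body)

    # docker_version()["Version"]  -> e.g. "28.0.4", matching the daemon log above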
Nov 5 15:49:46.360006 dockerd[1855]: time="2025-11-05T15:49:46.359962060Z" level=info msg="Completed buildkit initialization" Nov 5 15:49:46.369594 dockerd[1855]: time="2025-11-05T15:49:46.369364106Z" level=info msg="Daemon has completed initialization" Nov 5 15:49:46.369829 dockerd[1855]: time="2025-11-05T15:49:46.369792635Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:49:46.369853 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:49:47.091592 containerd[1604]: time="2025-11-05T15:49:47.091518377Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 15:49:47.759530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121908992.mount: Deactivated successfully. Nov 5 15:49:48.811563 containerd[1604]: time="2025-11-05T15:49:48.811496919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:48.812484 containerd[1604]: time="2025-11-05T15:49:48.812449861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 5 15:49:48.814417 containerd[1604]: time="2025-11-05T15:49:48.812872702Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:48.815371 containerd[1604]: time="2025-11-05T15:49:48.815341456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:48.816474 containerd[1604]: time="2025-11-05T15:49:48.816443026Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.724876389s" Nov 5 15:49:48.816591 containerd[1604]: time="2025-11-05T15:49:48.816562679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 5 15:49:48.817561 containerd[1604]: time="2025-11-05T15:49:48.817537591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 15:49:50.079708 containerd[1604]: time="2025-11-05T15:49:50.079625387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:50.081005 containerd[1604]: time="2025-11-05T15:49:50.080951900Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 5 15:49:50.081805 containerd[1604]: time="2025-11-05T15:49:50.081750137Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:50.084559 containerd[1604]: time="2025-11-05T15:49:50.083981452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:50.085145 
containerd[1604]: time="2025-11-05T15:49:50.085104308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.267536525s" Nov 5 15:49:50.085145 containerd[1604]: time="2025-11-05T15:49:50.085143050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 5 15:49:50.085805 containerd[1604]: time="2025-11-05T15:49:50.085764824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 15:49:51.159315 containerd[1604]: time="2025-11-05T15:49:51.159244016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:51.160251 containerd[1604]: time="2025-11-05T15:49:51.160206843Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 5 15:49:51.161051 containerd[1604]: time="2025-11-05T15:49:51.161022372Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:51.163896 containerd[1604]: time="2025-11-05T15:49:51.163855992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:51.164904 containerd[1604]: time="2025-11-05T15:49:51.164870282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.07846168s" Nov 5 15:49:51.165033 containerd[1604]: time="2025-11-05T15:49:51.165018949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 5 15:49:51.165619 containerd[1604]: time="2025-11-05T15:49:51.165510184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 15:49:52.261008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777138183.mount: Deactivated successfully. 
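[editor's note] The image pulls above log both the bytes read and the wall-clock time (27065392 bytes for kube-apiserver in 1.724876389s, 21159757 bytes for kube-controller-manager in 1.267536525s), which is enough to estimate effective pull throughput. A tiny Python check using those logged figures; only the unit conversion is added.

    PULLS = {
        # image: (bytes read, seconds), figures copied from the log above
        "kube-apiserver:v1.34.1":          (27_065_392, 1.724876389),
        "kube-controller-manager:v1.34.1": (21_159_757, 1.267536525),
    }

    for image, (size, secs) in PULLS.items():
        mib_s = size / 2**20 / secs
        print(f"{image}: {mib_s:.1f} MiB/s")   # ~15.0 and ~15.9 MiB/s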
Nov 5 15:49:52.657297 containerd[1604]: time="2025-11-05T15:49:52.657138629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:52.658640 containerd[1604]: time="2025-11-05T15:49:52.658540344Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 5 15:49:52.659664 containerd[1604]: time="2025-11-05T15:49:52.659615165Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:52.663898 containerd[1604]: time="2025-11-05T15:49:52.663313227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:52.665029 containerd[1604]: time="2025-11-05T15:49:52.664989186Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.499305853s" Nov 5 15:49:52.665352 containerd[1604]: time="2025-11-05T15:49:52.665311188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 5 15:49:52.665945 containerd[1604]: time="2025-11-05T15:49:52.665923349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 15:49:53.165165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238642162.mount: Deactivated successfully. Nov 5 15:49:53.266478 systemd-resolved[1282]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 5 15:49:54.042767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:49:54.046278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:49:54.211142 containerd[1604]: time="2025-11-05T15:49:54.211076121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.213705 containerd[1604]: time="2025-11-05T15:49:54.213634645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 5 15:49:54.216240 containerd[1604]: time="2025-11-05T15:49:54.216176461Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.220216 containerd[1604]: time="2025-11-05T15:49:54.220138709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.221280 containerd[1604]: time="2025-11-05T15:49:54.221134384Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.554470152s" Nov 5 15:49:54.221481 containerd[1604]: time="2025-11-05T15:49:54.221459906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 5 15:49:54.224893 containerd[1604]: time="2025-11-05T15:49:54.224663902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 5 15:49:54.253747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:49:54.264890 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:49:54.329603 kubelet[2207]: E1105 15:49:54.328710 2207 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:49:54.334261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:49:54.334526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:49:54.335761 systemd[1]: kubelet.service: Consumed 230ms CPU time, 110M memory peak. Nov 5 15:49:54.709423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689719240.mount: Deactivated successfully. 
Nov 5 15:49:54.714613 containerd[1604]: time="2025-11-05T15:49:54.714534618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.715322 containerd[1604]: time="2025-11-05T15:49:54.715257752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 5 15:49:54.717268 containerd[1604]: time="2025-11-05T15:49:54.715829718Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.718526 containerd[1604]: time="2025-11-05T15:49:54.718488583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:54.720102 containerd[1604]: time="2025-11-05T15:49:54.720068628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 495.370094ms" Nov 5 15:49:54.720102 containerd[1604]: time="2025-11-05T15:49:54.720102438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 5 15:49:54.720660 containerd[1604]: time="2025-11-05T15:49:54.720603220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 15:49:56.373820 systemd-resolved[1282]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 5 15:49:57.589306 containerd[1604]: time="2025-11-05T15:49:57.588104361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:57.589306 containerd[1604]: time="2025-11-05T15:49:57.589238781Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 5 15:49:57.590627 containerd[1604]: time="2025-11-05T15:49:57.590584029Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:57.594951 containerd[1604]: time="2025-11-05T15:49:57.594898672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:49:57.596473 containerd[1604]: time="2025-11-05T15:49:57.596427595Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.875785862s" Nov 5 15:49:57.596668 containerd[1604]: time="2025-11-05T15:49:57.596644092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 5 15:50:03.620144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 15:50:03.620632 systemd[1]: kubelet.service: Consumed 230ms CPU time, 110M memory peak. Nov 5 15:50:03.623691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:03.672597 systemd[1]: Reload requested from client PID 2288 ('systemctl') (unit session-7.scope)... Nov 5 15:50:03.672626 systemd[1]: Reloading... Nov 5 15:50:03.872606 zram_generator::config[2335]: No configuration found. Nov 5 15:50:04.152660 systemd[1]: Reloading finished in 479 ms. Nov 5 15:50:04.261757 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:50:04.262165 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:50:04.262684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:04.262833 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98M memory peak. Nov 5 15:50:04.265947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:04.445367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:04.462493 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:04.534057 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:04.535133 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:04.535133 kubelet[2387]: I1105 15:50:04.534740 2387 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:05.067532 kubelet[2387]: I1105 15:50:05.067444 2387 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:50:05.067532 kubelet[2387]: I1105 15:50:05.067496 2387 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:05.070390 kubelet[2387]: I1105 15:50:05.070303 2387 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:50:05.071693 kubelet[2387]: I1105 15:50:05.071622 2387 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:50:05.072114 kubelet[2387]: I1105 15:50:05.072074 2387 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:50:05.088651 kubelet[2387]: E1105 15:50:05.088510 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://24.144.92.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:50:05.091656 kubelet[2387]: I1105 15:50:05.090292 2387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:05.098412 kubelet[2387]: I1105 15:50:05.098384 2387 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:05.107341 kubelet[2387]: I1105 15:50:05.107295 2387 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 15:50:05.108556 kubelet[2387]: I1105 15:50:05.108490 2387 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:05.110505 kubelet[2387]: I1105 15:50:05.108750 2387 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-5-f7907e7d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:05.110867 kubelet[2387]: I1105 15:50:05.110841 2387 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:05.110957 kubelet[2387]: I1105 15:50:05.110947 2387 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:50:05.111322 kubelet[2387]: I1105 15:50:05.111297 2387 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:50:05.113821 kubelet[2387]: I1105 15:50:05.113783 2387 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:05.114268 kubelet[2387]: I1105 15:50:05.114238 2387 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:50:05.114403 kubelet[2387]: I1105 15:50:05.114388 2387 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:05.114496 kubelet[2387]: I1105 15:50:05.114483 2387 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:50:05.114624 kubelet[2387]: I1105 15:50:05.114592 2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:05.119545 kubelet[2387]: E1105 15:50:05.119484 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://24.144.92.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-5-f7907e7d84&limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:50:05.120077 kubelet[2387]: E1105 15:50:05.120046 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://24.144.92.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:50:05.121051 kubelet[2387]: I1105 15:50:05.121028 2387 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:05.127041 kubelet[2387]: I1105 15:50:05.126979 2387 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:50:05.127470 kubelet[2387]: I1105 15:50:05.127440 2387 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:50:05.127734 kubelet[2387]: W1105 15:50:05.127704 2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:50:05.136040 kubelet[2387]: I1105 15:50:05.136001 2387 server.go:1262] "Started kubelet" Nov 5 15:50:05.140341 kubelet[2387]: I1105 15:50:05.140036 2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:05.148467 kubelet[2387]: E1105 15:50:05.143925 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.92.23:6443/api/v1/namespaces/default/events\": dial tcp 24.144.92.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-5-f7907e7d84.187527161e43a9b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-5-f7907e7d84,UID:ci-4487.0.1-5-f7907e7d84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-5-f7907e7d84,},FirstTimestamp:2025-11-05 15:50:05.13593183 +0000 UTC m=+0.668107631,LastTimestamp:2025-11-05 15:50:05.13593183 +0000 UTC m=+0.668107631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-5-f7907e7d84,}" Nov 5 15:50:05.151747 kubelet[2387]: I1105 15:50:05.148557 2387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:05.157642 kubelet[2387]: I1105 15:50:05.156223 2387 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:50:05.157642 kubelet[2387]: E1105 15:50:05.156617 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" Nov 5 15:50:05.157642 kubelet[2387]: I1105 15:50:05.157124 2387 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:50:05.157642 kubelet[2387]: I1105 15:50:05.157212 2387 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:50:05.161738 kubelet[2387]: E1105 15:50:05.161671 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://24.144.92.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:50:05.161942 kubelet[2387]: E1105 15:50:05.161830 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-5-f7907e7d84?timeout=10s\": dial tcp 24.144.92.23:6443: connect: connection 
refused" interval="200ms" Nov 5 15:50:05.164609 kubelet[2387]: I1105 15:50:05.163867 2387 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:50:05.169626 kubelet[2387]: I1105 15:50:05.168117 2387 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:05.171490 kubelet[2387]: I1105 15:50:05.171425 2387 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:50:05.172102 kubelet[2387]: I1105 15:50:05.172045 2387 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:05.173670 kubelet[2387]: I1105 15:50:05.168610 2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:05.176803 kubelet[2387]: I1105 15:50:05.176766 2387 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:50:05.177138 kubelet[2387]: I1105 15:50:05.177117 2387 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:50:05.177488 kubelet[2387]: E1105 15:50:05.177461 2387 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:50:05.178979 kubelet[2387]: I1105 15:50:05.177930 2387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:05.204904 kubelet[2387]: I1105 15:50:05.204785 2387 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 15:50:05.209619 kubelet[2387]: I1105 15:50:05.209392 2387 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:05.209619 kubelet[2387]: I1105 15:50:05.209412 2387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:05.209619 kubelet[2387]: I1105 15:50:05.209433 2387 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:05.212099 kubelet[2387]: I1105 15:50:05.212050 2387 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:50:05.212490 kubelet[2387]: I1105 15:50:05.212443 2387 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:50:05.213301 kubelet[2387]: I1105 15:50:05.212888 2387 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:50:05.213301 kubelet[2387]: E1105 15:50:05.212965 2387 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:05.213703 kubelet[2387]: I1105 15:50:05.212399 2387 policy_none.go:49] "None policy: Start" Nov 5 15:50:05.213794 kubelet[2387]: I1105 15:50:05.213717 2387 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:50:05.213794 kubelet[2387]: I1105 15:50:05.213745 2387 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:50:05.214340 kubelet[2387]: E1105 15:50:05.214307 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.144.92.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:50:05.215466 kubelet[2387]: I1105 15:50:05.215423 2387 policy_none.go:47] "Start" Nov 5 15:50:05.228165 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:50:05.247750 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:50:05.256753 kubelet[2387]: E1105 15:50:05.256711 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" Nov 5 15:50:05.258208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:50:05.280422 kubelet[2387]: E1105 15:50:05.280030 2387 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:50:05.282245 kubelet[2387]: I1105 15:50:05.281758 2387 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:05.282245 kubelet[2387]: I1105 15:50:05.281782 2387 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:05.282245 kubelet[2387]: I1105 15:50:05.282202 2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:05.286174 kubelet[2387]: E1105 15:50:05.286091 2387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:50:05.286488 kubelet[2387]: E1105 15:50:05.286438 2387 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-5-f7907e7d84\" not found" Nov 5 15:50:05.332822 systemd[1]: Created slice kubepods-burstable-poda94850eb7e513c85f068357384d52457.slice - libcontainer container kubepods-burstable-poda94850eb7e513c85f068357384d52457.slice. 
Nov 5 15:50:05.351748 kubelet[2387]: E1105 15:50:05.351708 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358623 kubelet[2387]: I1105 15:50:05.357658 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cfb6bdcc7e72026a53aa332d7e7e4e6-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-5-f7907e7d84\" (UID: \"0cfb6bdcc7e72026a53aa332d7e7e4e6\") " pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358623 kubelet[2387]: I1105 15:50:05.357699 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358623 kubelet[2387]: I1105 15:50:05.357720 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358623 kubelet[2387]: I1105 15:50:05.357735 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358623 kubelet[2387]: I1105 15:50:05.357763 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.359454 kubelet[2387]: I1105 15:50:05.357778 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.359454 kubelet[2387]: I1105 15:50:05.357793 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.359454 kubelet[2387]: I1105 15:50:05.357807 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: 
\"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.359454 kubelet[2387]: I1105 15:50:05.357822 2387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.358712 systemd[1]: Created slice kubepods-burstable-pod0cfb6bdcc7e72026a53aa332d7e7e4e6.slice - libcontainer container kubepods-burstable-pod0cfb6bdcc7e72026a53aa332d7e7e4e6.slice. Nov 5 15:50:05.363040 kubelet[2387]: E1105 15:50:05.362988 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-5-f7907e7d84?timeout=10s\": dial tcp 24.144.92.23:6443: connect: connection refused" interval="400ms" Nov 5 15:50:05.364762 kubelet[2387]: E1105 15:50:05.364726 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.366555 systemd[1]: Created slice kubepods-burstable-pod75d89b65ddb7549acf773fbeb3e20e4d.slice - libcontainer container kubepods-burstable-pod75d89b65ddb7549acf773fbeb3e20e4d.slice. Nov 5 15:50:05.368753 kubelet[2387]: E1105 15:50:05.368716 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.383463 kubelet[2387]: I1105 15:50:05.383401 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.384068 kubelet[2387]: E1105 15:50:05.384017 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.23:6443/api/v1/nodes\": dial tcp 24.144.92.23:6443: connect: connection refused" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.535660 kubelet[2387]: E1105 15:50:05.535460 2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://24.144.92.23:6443/api/v1/namespaces/default/events\": dial tcp 24.144.92.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-5-f7907e7d84.187527161e43a9b6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-5-f7907e7d84,UID:ci-4487.0.1-5-f7907e7d84,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-5-f7907e7d84,},FirstTimestamp:2025-11-05 15:50:05.13593183 +0000 UTC m=+0.668107631,LastTimestamp:2025-11-05 15:50:05.13593183 +0000 UTC m=+0.668107631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-5-f7907e7d84,}" Nov 5 15:50:05.585746 kubelet[2387]: I1105 15:50:05.585049 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.585746 kubelet[2387]: E1105 15:50:05.585516 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.23:6443/api/v1/nodes\": dial tcp 24.144.92.23:6443: connect: 
connection refused" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.654936 kubelet[2387]: E1105 15:50:05.654786 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:05.655923 containerd[1604]: time="2025-11-05T15:50:05.655885250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-5-f7907e7d84,Uid:a94850eb7e513c85f068357384d52457,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:05.665889 systemd-resolved[1282]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Nov 5 15:50:05.668333 kubelet[2387]: E1105 15:50:05.667940 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:05.673884 kubelet[2387]: E1105 15:50:05.673842 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:05.675416 containerd[1604]: time="2025-11-05T15:50:05.675357537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-5-f7907e7d84,Uid:0cfb6bdcc7e72026a53aa332d7e7e4e6,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:05.680592 containerd[1604]: time="2025-11-05T15:50:05.678952500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-5-f7907e7d84,Uid:75d89b65ddb7549acf773fbeb3e20e4d,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:05.764389 kubelet[2387]: E1105 15:50:05.764340 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-5-f7907e7d84?timeout=10s\": dial tcp 24.144.92.23:6443: connect: connection refused" interval="800ms" Nov 5 15:50:05.988489 kubelet[2387]: I1105 15:50:05.988083 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:05.989216 kubelet[2387]: E1105 15:50:05.989166 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.23:6443/api/v1/nodes\": dial tcp 24.144.92.23:6443: connect: connection refused" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:06.261435 containerd[1604]: time="2025-11-05T15:50:06.261183453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:06.261899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56361468.mount: Deactivated successfully. 
Nov 5 15:50:06.262907 containerd[1604]: time="2025-11-05T15:50:06.262856790Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:50:06.265220 containerd[1604]: time="2025-11-05T15:50:06.265147898Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:06.268444 containerd[1604]: time="2025-11-05T15:50:06.268360394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:06.269397 containerd[1604]: time="2025-11-05T15:50:06.269331900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:50:06.269697 containerd[1604]: time="2025-11-05T15:50:06.269593389Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:06.270468 containerd[1604]: time="2025-11-05T15:50:06.270426257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:06.270962 containerd[1604]: time="2025-11-05T15:50:06.270929016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:50:06.271751 containerd[1604]: time="2025-11-05T15:50:06.271719088Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 606.992041ms" Nov 5 15:50:06.274611 containerd[1604]: time="2025-11-05T15:50:06.274492235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.255906ms" Nov 5 15:50:06.285615 containerd[1604]: time="2025-11-05T15:50:06.285402676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 603.213862ms" Nov 5 15:50:06.328424 kubelet[2387]: E1105 15:50:06.328335 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://24.144.92.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-5-f7907e7d84&limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:50:06.330181 kubelet[2387]: E1105 15:50:06.330105 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://24.144.92.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:50:06.391947 containerd[1604]: time="2025-11-05T15:50:06.391811121Z" level=info msg="connecting to shim 47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64" address="unix:///run/containerd/s/a4ff1c1c84a76edd2f86b0e2a57f0900f029affe0b3c72b4ec525f366bf5c1fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:06.400380 containerd[1604]: time="2025-11-05T15:50:06.399773150Z" level=info msg="connecting to shim a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03" address="unix:///run/containerd/s/3298895fd54fcf28743d9e1281dcc3b8421fccaae4918d540b5d1ec79e745efa" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:06.401216 containerd[1604]: time="2025-11-05T15:50:06.401173534Z" level=info msg="connecting to shim 8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040" address="unix:///run/containerd/s/ed9eb8903f80aa027ea154b5c625c998d96b26e9c856a8bfd671811932c8b7ff" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:06.512476 systemd[1]: Started cri-containerd-47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64.scope - libcontainer container 47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64. Nov 5 15:50:06.515375 systemd[1]: Started cri-containerd-8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040.scope - libcontainer container 8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040. Nov 5 15:50:06.518510 systemd[1]: Started cri-containerd-a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03.scope - libcontainer container a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03. 
Nov 5 15:50:06.565962 kubelet[2387]: E1105 15:50:06.565905 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://24.144.92.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-5-f7907e7d84?timeout=10s\": dial tcp 24.144.92.23:6443: connect: connection refused" interval="1.6s" Nov 5 15:50:06.620379 containerd[1604]: time="2025-11-05T15:50:06.619963713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-5-f7907e7d84,Uid:a94850eb7e513c85f068357384d52457,Namespace:kube-system,Attempt:0,} returns sandbox id \"a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03\"" Nov 5 15:50:06.623389 kubelet[2387]: E1105 15:50:06.622806 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:06.632021 containerd[1604]: time="2025-11-05T15:50:06.631979671Z" level=info msg="CreateContainer within sandbox \"a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:50:06.633216 containerd[1604]: time="2025-11-05T15:50:06.633163761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-5-f7907e7d84,Uid:75d89b65ddb7549acf773fbeb3e20e4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040\"" Nov 5 15:50:06.636260 kubelet[2387]: E1105 15:50:06.635829 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:06.647465 containerd[1604]: time="2025-11-05T15:50:06.647416642Z" level=info msg="CreateContainer within sandbox \"8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:50:06.655865 containerd[1604]: time="2025-11-05T15:50:06.655818685Z" level=info msg="Container 96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:06.666316 containerd[1604]: time="2025-11-05T15:50:06.666267226Z" level=info msg="Container e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:06.669040 containerd[1604]: time="2025-11-05T15:50:06.669000489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-5-f7907e7d84,Uid:0cfb6bdcc7e72026a53aa332d7e7e4e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64\"" Nov 5 15:50:06.670267 kubelet[2387]: E1105 15:50:06.670238 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:06.674257 containerd[1604]: time="2025-11-05T15:50:06.674083561Z" level=info msg="CreateContainer within sandbox \"8bd061d736dd5357abcc0422c2560cc422855f9b0f199dccd1f008ebaa968040\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983\"" Nov 5 15:50:06.674997 containerd[1604]: time="2025-11-05T15:50:06.674949573Z" level=info msg="CreateContainer within sandbox 
\"47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:50:06.675586 containerd[1604]: time="2025-11-05T15:50:06.675369959Z" level=info msg="StartContainer for \"e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983\"" Nov 5 15:50:06.675922 containerd[1604]: time="2025-11-05T15:50:06.675899635Z" level=info msg="CreateContainer within sandbox \"a344fae0ada0b784c851841d522a6943e41621c81ab69d4bd7f7ba6d5c88ca03\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4\"" Nov 5 15:50:06.676941 containerd[1604]: time="2025-11-05T15:50:06.676915609Z" level=info msg="connecting to shim e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983" address="unix:///run/containerd/s/ed9eb8903f80aa027ea154b5c625c998d96b26e9c856a8bfd671811932c8b7ff" protocol=ttrpc version=3 Nov 5 15:50:06.677779 containerd[1604]: time="2025-11-05T15:50:06.677745849Z" level=info msg="StartContainer for \"96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4\"" Nov 5 15:50:06.679026 containerd[1604]: time="2025-11-05T15:50:06.679000598Z" level=info msg="connecting to shim 96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4" address="unix:///run/containerd/s/3298895fd54fcf28743d9e1281dcc3b8421fccaae4918d540b5d1ec79e745efa" protocol=ttrpc version=3 Nov 5 15:50:06.688112 containerd[1604]: time="2025-11-05T15:50:06.688060378Z" level=info msg="Container 3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:06.699285 containerd[1604]: time="2025-11-05T15:50:06.699205167Z" level=info msg="CreateContainer within sandbox \"47356c263934923f19b4a0342807cdcd24b922701f58dc2a0df0fbb896d5bf64\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2\"" Nov 5 15:50:06.699842 containerd[1604]: time="2025-11-05T15:50:06.699819873Z" level=info msg="StartContainer for \"3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2\"" Nov 5 15:50:06.701407 containerd[1604]: time="2025-11-05T15:50:06.700879833Z" level=info msg="connecting to shim 3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2" address="unix:///run/containerd/s/a4ff1c1c84a76edd2f86b0e2a57f0900f029affe0b3c72b4ec525f366bf5c1fc" protocol=ttrpc version=3 Nov 5 15:50:06.701149 systemd[1]: Started cri-containerd-e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983.scope - libcontainer container e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983. Nov 5 15:50:06.716887 kubelet[2387]: E1105 15:50:06.716776 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://24.144.92.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:50:06.720807 systemd[1]: Started cri-containerd-96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4.scope - libcontainer container 96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4. 
Nov 5 15:50:06.746811 systemd[1]: Started cri-containerd-3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2.scope - libcontainer container 3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2. Nov 5 15:50:06.794270 kubelet[2387]: I1105 15:50:06.792564 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:06.795061 kubelet[2387]: E1105 15:50:06.794879 2387 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://24.144.92.23:6443/api/v1/nodes\": dial tcp 24.144.92.23:6443: connect: connection refused" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:06.800801 kubelet[2387]: E1105 15:50:06.800745 2387 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://24.144.92.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:50:06.805949 containerd[1604]: time="2025-11-05T15:50:06.805897780Z" level=info msg="StartContainer for \"e206570765660ec9e603f2ffd07dd3af11c4e46b9d6df78f5144a910b0dce983\" returns successfully" Nov 5 15:50:06.840672 containerd[1604]: time="2025-11-05T15:50:06.840487076Z" level=info msg="StartContainer for \"96d0993f0fc96017353631057eed8532f808f2c41445229ca6efa107b63fe4d4\" returns successfully" Nov 5 15:50:06.874402 containerd[1604]: time="2025-11-05T15:50:06.874011345Z" level=info msg="StartContainer for \"3acc7a25e245f829a61ccb67eb86ae0e9072c7a77f0dc0d80616e81d50e984d2\" returns successfully" Nov 5 15:50:07.099667 kubelet[2387]: E1105 15:50:07.099140 2387 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://24.144.92.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 24.144.92.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:50:07.238735 kubelet[2387]: E1105 15:50:07.238698 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:07.238890 kubelet[2387]: E1105 15:50:07.238869 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:07.243907 kubelet[2387]: E1105 15:50:07.243856 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:07.244046 kubelet[2387]: E1105 15:50:07.244028 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:07.245673 kubelet[2387]: E1105 15:50:07.245644 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:07.246602 kubelet[2387]: E1105 15:50:07.245816 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Nov 5 15:50:08.251471 kubelet[2387]: E1105 15:50:08.251426 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:08.252344 kubelet[2387]: E1105 15:50:08.251429 2387 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:08.252344 kubelet[2387]: E1105 15:50:08.252301 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:08.252474 kubelet[2387]: E1105 15:50:08.252301 2387 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:08.397589 kubelet[2387]: I1105 15:50:08.397112 2387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.171769 kubelet[2387]: E1105 15:50:09.171699 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.1-5-f7907e7d84\" not found" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.258701 kubelet[2387]: I1105 15:50:09.258655 2387 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.258701 kubelet[2387]: E1105 15:50:09.258699 2387 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4487.0.1-5-f7907e7d84\": node \"ci-4487.0.1-5-f7907e7d84\" not found" Nov 5 15:50:09.303362 kubelet[2387]: E1105 15:50:09.303314 2387 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.1-5-f7907e7d84\" not found" Nov 5 15:50:09.458489 kubelet[2387]: I1105 15:50:09.457966 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.467899 kubelet[2387]: E1105 15:50:09.467082 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-5-f7907e7d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.467899 kubelet[2387]: I1105 15:50:09.467136 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.469827 kubelet[2387]: E1105 15:50:09.469777 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.470076 kubelet[2387]: I1105 15:50:09.469921 2387 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:09.472310 kubelet[2387]: E1105 15:50:09.472260 2387 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:10.118867 kubelet[2387]: I1105 15:50:10.117842 2387 apiserver.go:52] "Watching apiserver" Nov 5 15:50:10.157479 kubelet[2387]: I1105 
15:50:10.157409 2387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:50:11.440630 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)... Nov 5 15:50:11.440655 systemd[1]: Reloading... Nov 5 15:50:11.561598 zram_generator::config[2715]: No configuration found. Nov 5 15:50:11.899784 systemd[1]: Reloading finished in 458 ms. Nov 5 15:50:11.938238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:11.952177 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:50:11.953162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:11.953559 systemd[1]: kubelet.service: Consumed 1.182s CPU time, 122.7M memory peak. Nov 5 15:50:11.956388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:12.142108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:12.156087 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:12.238560 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:12.238560 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:12.238981 kubelet[2770]: I1105 15:50:12.238610 2770 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:12.246922 kubelet[2770]: I1105 15:50:12.246297 2770 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:50:12.246922 kubelet[2770]: I1105 15:50:12.246335 2770 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:12.246922 kubelet[2770]: I1105 15:50:12.246362 2770 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:50:12.246922 kubelet[2770]: I1105 15:50:12.246372 2770 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:50:12.246922 kubelet[2770]: I1105 15:50:12.246903 2770 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:50:12.249450 kubelet[2770]: I1105 15:50:12.249418 2770 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:50:12.254447 kubelet[2770]: I1105 15:50:12.254385 2770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:12.262484 kubelet[2770]: I1105 15:50:12.262452 2770 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:12.266928 kubelet[2770]: I1105 15:50:12.266875 2770 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 15:50:12.267163 kubelet[2770]: I1105 15:50:12.267117 2770 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:12.267412 kubelet[2770]: I1105 15:50:12.267167 2770 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-5-f7907e7d84","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:12.267523 kubelet[2770]: I1105 15:50:12.267429 2770 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:12.267523 kubelet[2770]: I1105 15:50:12.267445 2770 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:50:12.267523 kubelet[2770]: I1105 15:50:12.267484 2770 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:50:12.270230 kubelet[2770]: I1105 15:50:12.270169 2770 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:12.270531 kubelet[2770]: I1105 15:50:12.270513 2770 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:50:12.270596 kubelet[2770]: I1105 15:50:12.270536 2770 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:12.270596 kubelet[2770]: I1105 15:50:12.270564 2770 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:50:12.270675 kubelet[2770]: I1105 15:50:12.270611 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:12.280627 kubelet[2770]: I1105 15:50:12.280220 2770 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:12.281012 kubelet[2770]: I1105 15:50:12.280988 2770 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:50:12.281155 kubelet[2770]: I1105 15:50:12.281031 2770 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:50:12.292534 
kubelet[2770]: I1105 15:50:12.292502 2770 server.go:1262] "Started kubelet" Nov 5 15:50:12.297276 kubelet[2770]: I1105 15:50:12.297156 2770 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:12.303305 kubelet[2770]: I1105 15:50:12.303148 2770 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:12.303492 kubelet[2770]: I1105 15:50:12.303350 2770 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:50:12.304550 kubelet[2770]: I1105 15:50:12.303862 2770 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:12.308586 kubelet[2770]: I1105 15:50:12.308053 2770 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:50:12.312697 kubelet[2770]: I1105 15:50:12.312639 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:12.316590 kubelet[2770]: I1105 15:50:12.316530 2770 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:12.325197 kubelet[2770]: I1105 15:50:12.323683 2770 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:50:12.325834 kubelet[2770]: I1105 15:50:12.325716 2770 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:50:12.328791 kubelet[2770]: E1105 15:50:12.328743 2770 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:50:12.331189 kubelet[2770]: I1105 15:50:12.330678 2770 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:50:12.331898 kubelet[2770]: I1105 15:50:12.331708 2770 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:12.334559 kubelet[2770]: I1105 15:50:12.334506 2770 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 5 15:50:12.340673 kubelet[2770]: I1105 15:50:12.339311 2770 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:50:12.346498 kubelet[2770]: I1105 15:50:12.345689 2770 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:50:12.369765 kubelet[2770]: I1105 15:50:12.369717 2770 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 5 15:50:12.369765 kubelet[2770]: I1105 15:50:12.369754 2770 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:50:12.369961 kubelet[2770]: I1105 15:50:12.369796 2770 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:50:12.369961 kubelet[2770]: E1105 15:50:12.369863 2770 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424305 2770 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424355 2770 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424392 2770 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424587 2770 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424599 2770 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424619 2770 policy_none.go:49] "None policy: Start" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424632 2770 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:50:12.424708 kubelet[2770]: I1105 15:50:12.424647 2770 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:50:12.425117 kubelet[2770]: I1105 15:50:12.424738 2770 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 15:50:12.425117 kubelet[2770]: I1105 15:50:12.424748 2770 policy_none.go:47] "Start" Nov 5 15:50:12.457942 kubelet[2770]: E1105 15:50:12.457865 2770 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:50:12.460802 kubelet[2770]: I1105 15:50:12.460228 2770 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:12.460802 kubelet[2770]: I1105 15:50:12.460255 2770 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:12.461234 sudo[2808]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 5 15:50:12.461581 sudo[2808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 5 15:50:12.467651 kubelet[2770]: I1105 15:50:12.466244 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:12.470189 kubelet[2770]: E1105 15:50:12.470149 2770 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:50:12.474656 kubelet[2770]: I1105 15:50:12.474610 2770 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.476762 kubelet[2770]: I1105 15:50:12.475961 2770 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.481420 kubelet[2770]: I1105 15:50:12.479752 2770 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.495013 kubelet[2770]: I1105 15:50:12.493853 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:12.499225 kubelet[2770]: I1105 15:50:12.498788 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:12.501786 kubelet[2770]: I1105 15:50:12.501401 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:12.542165 kubelet[2770]: I1105 15:50:12.542102 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542367 kubelet[2770]: I1105 15:50:12.542205 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542367 kubelet[2770]: I1105 15:50:12.542226 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542470 kubelet[2770]: I1105 15:50:12.542244 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542515 kubelet[2770]: I1105 15:50:12.542477 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cfb6bdcc7e72026a53aa332d7e7e4e6-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-5-f7907e7d84\" (UID: \"0cfb6bdcc7e72026a53aa332d7e7e4e6\") " pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542515 kubelet[2770]: I1105 15:50:12.542493 2770 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542633 kubelet[2770]: I1105 15:50:12.542527 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75d89b65ddb7549acf773fbeb3e20e4d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" (UID: \"75d89b65ddb7549acf773fbeb3e20e4d\") " pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542633 kubelet[2770]: I1105 15:50:12.542545 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.542824 kubelet[2770]: I1105 15:50:12.542562 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a94850eb7e513c85f068357384d52457-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-5-f7907e7d84\" (UID: \"a94850eb7e513c85f068357384d52457\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.571395 kubelet[2770]: I1105 15:50:12.571344 2770 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.583552 kubelet[2770]: I1105 15:50:12.583504 2770 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.583728 kubelet[2770]: I1105 15:50:12.583627 2770 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:12.795563 kubelet[2770]: E1105 15:50:12.795409 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:12.800451 kubelet[2770]: E1105 15:50:12.800399 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:12.803857 kubelet[2770]: E1105 15:50:12.803796 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:12.992895 sudo[2808]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:13.275949 kubelet[2770]: I1105 15:50:13.274817 2770 apiserver.go:52] "Watching apiserver" Nov 5 15:50:13.327441 kubelet[2770]: I1105 15:50:13.327379 2770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:50:13.401771 kubelet[2770]: E1105 15:50:13.401445 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:13.403242 kubelet[2770]: E1105 15:50:13.402563 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:13.403242 kubelet[2770]: I1105 15:50:13.402671 2770 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:13.414080 kubelet[2770]: I1105 15:50:13.414033 2770 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:13.414852 kubelet[2770]: E1105 15:50:13.414240 2770 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-5-f7907e7d84\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" Nov 5 15:50:13.414852 kubelet[2770]: E1105 15:50:13.414410 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:13.431849 kubelet[2770]: I1105 15:50:13.431734 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-5-f7907e7d84" podStartSLOduration=1.431710777 podStartE2EDuration="1.431710777s" podCreationTimestamp="2025-11-05 15:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:13.414029478 +0000 UTC m=+1.247995553" watchObservedRunningTime="2025-11-05 15:50:13.431710777 +0000 UTC m=+1.265676840" Nov 5 15:50:13.432074 kubelet[2770]: I1105 15:50:13.431942 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-5-f7907e7d84" podStartSLOduration=1.431929948 podStartE2EDuration="1.431929948s" podCreationTimestamp="2025-11-05 15:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:13.431503212 +0000 UTC m=+1.265469283" watchObservedRunningTime="2025-11-05 15:50:13.431929948 +0000 UTC m=+1.265896016" Nov 5 15:50:13.461319 kubelet[2770]: I1105 15:50:13.461055 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-5-f7907e7d84" podStartSLOduration=1.460799967 podStartE2EDuration="1.460799967s" podCreationTimestamp="2025-11-05 15:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:13.445305491 +0000 UTC m=+1.279271560" watchObservedRunningTime="2025-11-05 15:50:13.460799967 +0000 UTC m=+1.294766033" Nov 5 15:50:14.405284 kubelet[2770]: E1105 15:50:14.405209 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:14.405867 kubelet[2770]: E1105 15:50:14.405836 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:14.790883 sudo[1838]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:14.796626 sshd[1837]: Connection closed by 139.178.68.195 port 45096 Nov 5 15:50:14.795775 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:14.802373 systemd[1]: 
sshd@6-24.144.92.23:22-139.178.68.195:45096.service: Deactivated successfully. Nov 5 15:50:14.802642 systemd-logind[1571]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:50:14.807453 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:50:14.807825 systemd[1]: session-7.scope: Consumed 8.071s CPU time, 225M memory peak. Nov 5 15:50:14.811953 systemd-logind[1571]: Removed session 7. Nov 5 15:50:16.270479 kubelet[2770]: E1105 15:50:16.269987 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:16.410514 kubelet[2770]: E1105 15:50:16.410470 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:17.659030 systemd-timesyncd[1464]: Contacted time server 67.217.246.204:123 (2.flatcar.pool.ntp.org). Nov 5 15:50:17.659144 systemd-resolved[1282]: Clock change detected. Flushing caches. Nov 5 15:50:17.659712 systemd-timesyncd[1464]: Initial clock synchronization to Wed 2025-11-05 15:50:17.658526 UTC. Nov 5 15:50:18.025127 kubelet[2770]: I1105 15:50:18.024799 2770 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:50:18.025928 containerd[1604]: time="2025-11-05T15:50:18.025867169Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 15:50:18.026699 kubelet[2770]: I1105 15:50:18.026634 2770 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:50:18.813159 kubelet[2770]: E1105 15:50:18.813111 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.040389 systemd[1]: Created slice kubepods-besteffort-podbf9ef90e_7987_42b3_b78e_1d278295c445.slice - libcontainer container kubepods-besteffort-podbf9ef90e_7987_42b3_b78e_1d278295c445.slice. Nov 5 15:50:19.054251 systemd[1]: Created slice kubepods-burstable-pod10137070_5223_4ae1_9532_e98b1ec3284f.slice - libcontainer container kubepods-burstable-pod10137070_5223_4ae1_9532_e98b1ec3284f.slice. 
Nov 5 15:50:19.066813 kubelet[2770]: I1105 15:50:19.066255 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wb7r\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-kube-api-access-9wb7r\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.067312 kubelet[2770]: I1105 15:50:19.066865 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-bpf-maps\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.067312 kubelet[2770]: I1105 15:50:19.067154 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cni-path\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.067662 kubelet[2770]: I1105 15:50:19.067181 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-hubble-tls\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.068763 kubelet[2770]: I1105 15:50:19.068699 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf9ef90e-7987-42b3-b78e-1d278295c445-lib-modules\") pod \"kube-proxy-48crn\" (UID: \"bf9ef90e-7987-42b3-b78e-1d278295c445\") " pod="kube-system/kube-proxy-48crn" Nov 5 15:50:19.068926 kubelet[2770]: I1105 15:50:19.068855 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-hostproc\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.068926 kubelet[2770]: I1105 15:50:19.068876 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-lib-modules\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.069186 kubelet[2770]: I1105 15:50:19.069055 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-xtables-lock\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.069334 kubelet[2770]: I1105 15:50:19.069289 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf9ef90e-7987-42b3-b78e-1d278295c445-xtables-lock\") pod \"kube-proxy-48crn\" (UID: \"bf9ef90e-7987-42b3-b78e-1d278295c445\") " pod="kube-system/kube-proxy-48crn" Nov 5 15:50:19.069440 kubelet[2770]: I1105 15:50:19.069423 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-cgroup\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.069571 kubelet[2770]: I1105 15:50:19.069490 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-etc-cni-netd\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.069571 kubelet[2770]: I1105 15:50:19.069512 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10137070-5223-4ae1-9532-e98b1ec3284f-clustermesh-secrets\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.069571 kubelet[2770]: I1105 15:50:19.069532 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-config-path\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.070192 kubelet[2770]: I1105 15:50:19.070017 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-kernel\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.070192 kubelet[2770]: I1105 15:50:19.070044 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf9ef90e-7987-42b3-b78e-1d278295c445-kube-proxy\") pod \"kube-proxy-48crn\" (UID: \"bf9ef90e-7987-42b3-b78e-1d278295c445\") " pod="kube-system/kube-proxy-48crn" Nov 5 15:50:19.070192 kubelet[2770]: I1105 15:50:19.070058 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdhb\" (UniqueName: \"kubernetes.io/projected/bf9ef90e-7987-42b3-b78e-1d278295c445-kube-api-access-4kdhb\") pod \"kube-proxy-48crn\" (UID: \"bf9ef90e-7987-42b3-b78e-1d278295c445\") " pod="kube-system/kube-proxy-48crn" Nov 5 15:50:19.070192 kubelet[2770]: I1105 15:50:19.070081 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-run\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.070192 kubelet[2770]: I1105 15:50:19.070137 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-net\") pod \"cilium-xjsm2\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " pod="kube-system/cilium-xjsm2" Nov 5 15:50:19.206625 systemd[1]: Created slice kubepods-besteffort-pod09f1bbf5_fb38_49ef_80ed_eec3e64a2f8a.slice - libcontainer container kubepods-besteffort-pod09f1bbf5_fb38_49ef_80ed_eec3e64a2f8a.slice. 
Nov 5 15:50:19.272625 kubelet[2770]: I1105 15:50:19.272549 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-bsrmq\" (UID: \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\") " pod="kube-system/cilium-operator-6f9c7c5859-bsrmq" Nov 5 15:50:19.272625 kubelet[2770]: I1105 15:50:19.272605 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmjv4\" (UniqueName: \"kubernetes.io/projected/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-kube-api-access-gmjv4\") pod \"cilium-operator-6f9c7c5859-bsrmq\" (UID: \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\") " pod="kube-system/cilium-operator-6f9c7c5859-bsrmq" Nov 5 15:50:19.352116 kubelet[2770]: E1105 15:50:19.351960 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.354124 containerd[1604]: time="2025-11-05T15:50:19.354075403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48crn,Uid:bf9ef90e-7987-42b3-b78e-1d278295c445,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:19.361580 kubelet[2770]: E1105 15:50:19.361528 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.367679 containerd[1604]: time="2025-11-05T15:50:19.363143167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjsm2,Uid:10137070-5223-4ae1-9532-e98b1ec3284f,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:19.385145 containerd[1604]: time="2025-11-05T15:50:19.384997306Z" level=info msg="connecting to shim 8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c" address="unix:///run/containerd/s/7bfc97eebe41eda8a5cb2eb10443e00362a476468e6300d13349a51b36891b19" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:19.412094 containerd[1604]: time="2025-11-05T15:50:19.412019027Z" level=info msg="connecting to shim df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:19.423904 systemd[1]: Started cri-containerd-8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c.scope - libcontainer container 8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c. Nov 5 15:50:19.449996 systemd[1]: Started cri-containerd-df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6.scope - libcontainer container df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6. 
Nov 5 15:50:19.497381 kubelet[2770]: E1105 15:50:19.497341 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.507001 containerd[1604]: time="2025-11-05T15:50:19.506235960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-48crn,Uid:bf9ef90e-7987-42b3-b78e-1d278295c445,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c\"" Nov 5 15:50:19.515000 kubelet[2770]: E1105 15:50:19.511923 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.515000 kubelet[2770]: E1105 15:50:19.514589 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.519094 containerd[1604]: time="2025-11-05T15:50:19.517908459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bsrmq,Uid:09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:19.525852 containerd[1604]: time="2025-11-05T15:50:19.525811724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xjsm2,Uid:10137070-5223-4ae1-9532-e98b1ec3284f,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\"" Nov 5 15:50:19.528313 containerd[1604]: time="2025-11-05T15:50:19.528240353Z" level=info msg="CreateContainer within sandbox \"8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:50:19.529534 kubelet[2770]: E1105 15:50:19.529498 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:19.532906 containerd[1604]: time="2025-11-05T15:50:19.532327870Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 15:50:19.545321 containerd[1604]: time="2025-11-05T15:50:19.545248961Z" level=info msg="Container 70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:19.558375 containerd[1604]: time="2025-11-05T15:50:19.557786445Z" level=info msg="connecting to shim dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1" address="unix:///run/containerd/s/1c53ef3ccef60d488c048d0c4c966614fec526f1f684d5d0aee8a9fc5b09748c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:19.558518 containerd[1604]: time="2025-11-05T15:50:19.558443299Z" level=info msg="CreateContainer within sandbox \"8d1d87a1d7ba71883413fbb8e28f4023fed7c28026297e98bf028d148e352e5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c\"" Nov 5 15:50:19.561384 containerd[1604]: time="2025-11-05T15:50:19.561339391Z" level=info msg="StartContainer for \"70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c\"" Nov 5 15:50:19.563206 containerd[1604]: time="2025-11-05T15:50:19.563162929Z" level=info msg="connecting to shim 
70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c" address="unix:///run/containerd/s/7bfc97eebe41eda8a5cb2eb10443e00362a476468e6300d13349a51b36891b19" protocol=ttrpc version=3 Nov 5 15:50:19.590966 systemd[1]: Started cri-containerd-70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c.scope - libcontainer container 70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c. Nov 5 15:50:19.603924 systemd[1]: Started cri-containerd-dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1.scope - libcontainer container dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1. Nov 5 15:50:19.685142 containerd[1604]: time="2025-11-05T15:50:19.685097798Z" level=info msg="StartContainer for \"70720a659197165e8d9778c14afe3dbd78e23cd94687bfdf0dda9d902b2b251c\" returns successfully" Nov 5 15:50:19.703811 containerd[1604]: time="2025-11-05T15:50:19.703768960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-bsrmq,Uid:09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\"" Nov 5 15:50:19.705628 kubelet[2770]: E1105 15:50:19.705594 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:20.504808 kubelet[2770]: E1105 15:50:20.504764 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:20.520752 kubelet[2770]: I1105 15:50:20.520577 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48crn" podStartSLOduration=2.520554927 podStartE2EDuration="2.520554927s" podCreationTimestamp="2025-11-05 15:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:20.518414468 +0000 UTC m=+7.271302962" watchObservedRunningTime="2025-11-05 15:50:20.520554927 +0000 UTC m=+7.273443400" Nov 5 15:50:22.410192 kubelet[2770]: E1105 15:50:22.409835 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:22.512957 kubelet[2770]: E1105 15:50:22.512866 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:23.515746 kubelet[2770]: E1105 15:50:23.515703 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:24.606354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834884853.mount: Deactivated successfully. Nov 5 15:50:27.315670 update_engine[1572]: I20251105 15:50:27.314713 1572 update_attempter.cc:509] Updating boot flags... 
Nov 5 15:50:27.514203 containerd[1604]: time="2025-11-05T15:50:27.512902704Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 5 15:50:27.521595 containerd[1604]: time="2025-11-05T15:50:27.521478226Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:27.523665 containerd[1604]: time="2025-11-05T15:50:27.523537720Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.991141161s" Nov 5 15:50:27.523665 containerd[1604]: time="2025-11-05T15:50:27.523604668Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 5 15:50:27.528883 containerd[1604]: time="2025-11-05T15:50:27.528160614Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:27.530462 containerd[1604]: time="2025-11-05T15:50:27.530404301Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 15:50:27.537842 containerd[1604]: time="2025-11-05T15:50:27.537474799Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 15:50:27.583106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606233199.mount: Deactivated successfully. Nov 5 15:50:27.600130 containerd[1604]: time="2025-11-05T15:50:27.592536901Z" level=info msg="Container b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:27.620674 containerd[1604]: time="2025-11-05T15:50:27.619207325Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\"" Nov 5 15:50:27.623887 containerd[1604]: time="2025-11-05T15:50:27.623852164Z" level=info msg="StartContainer for \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\"" Nov 5 15:50:27.630459 containerd[1604]: time="2025-11-05T15:50:27.630411551Z" level=info msg="connecting to shim b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" protocol=ttrpc version=3 Nov 5 15:50:27.686963 systemd[1]: Started cri-containerd-b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5.scope - libcontainer container b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5. 
Nov 5 15:50:27.790065 containerd[1604]: time="2025-11-05T15:50:27.789740969Z" level=info msg="StartContainer for \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" returns successfully" Nov 5 15:50:27.813570 systemd[1]: cri-containerd-b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5.scope: Deactivated successfully. Nov 5 15:50:27.827507 containerd[1604]: time="2025-11-05T15:50:27.827440861Z" level=info msg="received exit event container_id:\"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" id:\"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" pid:3206 exited_at:{seconds:1762357827 nanos:817360862}" Nov 5 15:50:27.843064 containerd[1604]: time="2025-11-05T15:50:27.842528726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" id:\"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" pid:3206 exited_at:{seconds:1762357827 nanos:817360862}" Nov 5 15:50:28.536786 kubelet[2770]: E1105 15:50:28.536206 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:28.543728 containerd[1604]: time="2025-11-05T15:50:28.543394878Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 15:50:28.557959 containerd[1604]: time="2025-11-05T15:50:28.557358305Z" level=info msg="Container 810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:28.579878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5-rootfs.mount: Deactivated successfully. Nov 5 15:50:28.582574 containerd[1604]: time="2025-11-05T15:50:28.581898367Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\"" Nov 5 15:50:28.583722 containerd[1604]: time="2025-11-05T15:50:28.582898068Z" level=info msg="StartContainer for \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\"" Nov 5 15:50:28.584140 containerd[1604]: time="2025-11-05T15:50:28.584108905Z" level=info msg="connecting to shim 810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" protocol=ttrpc version=3 Nov 5 15:50:28.613960 systemd[1]: Started cri-containerd-810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6.scope - libcontainer container 810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6. Nov 5 15:50:28.658905 containerd[1604]: time="2025-11-05T15:50:28.658855402Z" level=info msg="StartContainer for \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" returns successfully" Nov 5 15:50:28.679129 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:50:28.679585 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:50:28.680124 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Nov 5 15:50:28.683092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:50:28.687360 systemd[1]: cri-containerd-810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6.scope: Deactivated successfully. Nov 5 15:50:28.690679 containerd[1604]: time="2025-11-05T15:50:28.689541507Z" level=info msg="received exit event container_id:\"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" id:\"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" pid:3254 exited_at:{seconds:1762357828 nanos:689027775}" Nov 5 15:50:28.690679 containerd[1604]: time="2025-11-05T15:50:28.689715243Z" level=info msg="TaskExit event in podsandbox handler container_id:\"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" id:\"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" pid:3254 exited_at:{seconds:1762357828 nanos:689027775}" Nov 5 15:50:28.721373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:50:28.734385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6-rootfs.mount: Deactivated successfully. Nov 5 15:50:29.544684 kubelet[2770]: E1105 15:50:29.542799 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:29.553259 containerd[1604]: time="2025-11-05T15:50:29.551481963Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 15:50:29.598680 containerd[1604]: time="2025-11-05T15:50:29.596967957Z" level=info msg="Container 76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:29.600977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514244680.mount: Deactivated successfully. 
Nov 5 15:50:29.609844 containerd[1604]: time="2025-11-05T15:50:29.609783101Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:29.611784 containerd[1604]: time="2025-11-05T15:50:29.611737642Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\"" Nov 5 15:50:29.612342 containerd[1604]: time="2025-11-05T15:50:29.611980789Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 5 15:50:29.613041 containerd[1604]: time="2025-11-05T15:50:29.612993165Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:29.614160 containerd[1604]: time="2025-11-05T15:50:29.614056845Z" level=info msg="StartContainer for \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\"" Nov 5 15:50:29.617557 containerd[1604]: time="2025-11-05T15:50:29.616075560Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.085139856s" Nov 5 15:50:29.617557 containerd[1604]: time="2025-11-05T15:50:29.616125710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 5 15:50:29.619496 containerd[1604]: time="2025-11-05T15:50:29.619450168Z" level=info msg="connecting to shim 76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" protocol=ttrpc version=3 Nov 5 15:50:29.630596 containerd[1604]: time="2025-11-05T15:50:29.630522662Z" level=info msg="CreateContainer within sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 15:50:29.649870 containerd[1604]: time="2025-11-05T15:50:29.649829049Z" level=info msg="Container e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:29.666822 containerd[1604]: time="2025-11-05T15:50:29.666755327Z" level=info msg="CreateContainer within sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\"" Nov 5 15:50:29.668054 containerd[1604]: time="2025-11-05T15:50:29.667996350Z" level=info msg="StartContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\"" Nov 5 15:50:29.668914 systemd[1]: Started cri-containerd-76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c.scope 
- libcontainer container 76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c. Nov 5 15:50:29.675250 containerd[1604]: time="2025-11-05T15:50:29.674234247Z" level=info msg="connecting to shim e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3" address="unix:///run/containerd/s/1c53ef3ccef60d488c048d0c4c966614fec526f1f684d5d0aee8a9fc5b09748c" protocol=ttrpc version=3 Nov 5 15:50:29.721904 systemd[1]: Started cri-containerd-e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3.scope - libcontainer container e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3. Nov 5 15:50:29.775134 systemd[1]: cri-containerd-76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c.scope: Deactivated successfully. Nov 5 15:50:29.775423 systemd[1]: cri-containerd-76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c.scope: Consumed 41ms CPU time, 5.6M memory peak, 1M read from disk. Nov 5 15:50:29.779741 containerd[1604]: time="2025-11-05T15:50:29.779258062Z" level=info msg="received exit event container_id:\"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" id:\"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" pid:3314 exited_at:{seconds:1762357829 nanos:778827786}" Nov 5 15:50:29.779741 containerd[1604]: time="2025-11-05T15:50:29.779421832Z" level=info msg="StartContainer for \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" returns successfully" Nov 5 15:50:29.779741 containerd[1604]: time="2025-11-05T15:50:29.779706727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" id:\"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" pid:3314 exited_at:{seconds:1762357829 nanos:778827786}" Nov 5 15:50:29.799030 containerd[1604]: time="2025-11-05T15:50:29.798909476Z" level=info msg="StartContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" returns successfully" Nov 5 15:50:30.557444 kubelet[2770]: E1105 15:50:30.557319 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:30.570714 kubelet[2770]: E1105 15:50:30.570137 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:30.591713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c-rootfs.mount: Deactivated successfully. Nov 5 15:50:30.607309 containerd[1604]: time="2025-11-05T15:50:30.606835896Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 15:50:30.617682 containerd[1604]: time="2025-11-05T15:50:30.615775016Z" level=info msg="Container 14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:30.621418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217220934.mount: Deactivated successfully. 
Nov 5 15:50:30.629190 containerd[1604]: time="2025-11-05T15:50:30.629029492Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\"" Nov 5 15:50:30.632448 containerd[1604]: time="2025-11-05T15:50:30.632338793Z" level=info msg="StartContainer for \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\"" Nov 5 15:50:30.633666 containerd[1604]: time="2025-11-05T15:50:30.633607300Z" level=info msg="connecting to shim 14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" protocol=ttrpc version=3 Nov 5 15:50:30.692120 systemd[1]: Started cri-containerd-14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e.scope - libcontainer container 14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e. Nov 5 15:50:30.789677 containerd[1604]: time="2025-11-05T15:50:30.789470266Z" level=info msg="StartContainer for \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" returns successfully" Nov 5 15:50:30.790606 systemd[1]: cri-containerd-14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e.scope: Deactivated successfully. Nov 5 15:50:30.793982 containerd[1604]: time="2025-11-05T15:50:30.793064582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" id:\"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" pid:3388 exited_at:{seconds:1762357830 nanos:792690686}" Nov 5 15:50:30.793982 containerd[1604]: time="2025-11-05T15:50:30.793211343Z" level=info msg="received exit event container_id:\"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" id:\"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" pid:3388 exited_at:{seconds:1762357830 nanos:792690686}" Nov 5 15:50:30.843105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e-rootfs.mount: Deactivated successfully. 
Nov 5 15:50:31.578697 kubelet[2770]: E1105 15:50:31.578246 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:31.579934 kubelet[2770]: E1105 15:50:31.579794 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:31.589838 containerd[1604]: time="2025-11-05T15:50:31.589627540Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 15:50:31.605848 kubelet[2770]: I1105 15:50:31.602944 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-bsrmq" podStartSLOduration=2.692263795 podStartE2EDuration="12.602923972s" podCreationTimestamp="2025-11-05 15:50:19 +0000 UTC" firstStartedPulling="2025-11-05 15:50:19.707276105 +0000 UTC m=+6.460164579" lastFinishedPulling="2025-11-05 15:50:29.617936268 +0000 UTC m=+16.370824756" observedRunningTime="2025-11-05 15:50:30.823987516 +0000 UTC m=+17.576876010" watchObservedRunningTime="2025-11-05 15:50:31.602923972 +0000 UTC m=+18.355812465" Nov 5 15:50:31.648975 containerd[1604]: time="2025-11-05T15:50:31.648906517Z" level=info msg="Container 61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:31.648949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount358647599.mount: Deactivated successfully. Nov 5 15:50:31.657561 containerd[1604]: time="2025-11-05T15:50:31.657469170Z" level=info msg="CreateContainer within sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\"" Nov 5 15:50:31.658391 containerd[1604]: time="2025-11-05T15:50:31.658344195Z" level=info msg="StartContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\"" Nov 5 15:50:31.663623 containerd[1604]: time="2025-11-05T15:50:31.663514460Z" level=info msg="connecting to shim 61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab" address="unix:///run/containerd/s/568b8d2c62143a02572186a1e8692a3eaea1bf9f42e3e891cb9525cdda92e49f" protocol=ttrpc version=3 Nov 5 15:50:31.689908 systemd[1]: Started cri-containerd-61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab.scope - libcontainer container 61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab. 
Nov 5 15:50:31.734444 containerd[1604]: time="2025-11-05T15:50:31.734339338Z" level=info msg="StartContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" returns successfully" Nov 5 15:50:31.861332 containerd[1604]: time="2025-11-05T15:50:31.861288842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" id:\"4d28fe2043aa5faeef03be406c700054146abb1f0651eb227bb1253ffffe5461\" pid:3456 exited_at:{seconds:1762357831 nanos:860738398}" Nov 5 15:50:31.930305 kubelet[2770]: I1105 15:50:31.930043 2770 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 15:50:32.007151 systemd[1]: Created slice kubepods-burstable-pod49e5993b_308c_4277_9128_0220771cc7d7.slice - libcontainer container kubepods-burstable-pod49e5993b_308c_4277_9128_0220771cc7d7.slice. Nov 5 15:50:32.017532 systemd[1]: Created slice kubepods-burstable-pod8bf30910_9f42_4031_b0d3_98322f6b22bd.slice - libcontainer container kubepods-burstable-pod8bf30910_9f42_4031_b0d3_98322f6b22bd.slice. Nov 5 15:50:32.062972 kubelet[2770]: I1105 15:50:32.062927 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bf30910-9f42-4031-b0d3-98322f6b22bd-config-volume\") pod \"coredns-66bc5c9577-wc4gx\" (UID: \"8bf30910-9f42-4031-b0d3-98322f6b22bd\") " pod="kube-system/coredns-66bc5c9577-wc4gx" Nov 5 15:50:32.062972 kubelet[2770]: I1105 15:50:32.062969 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49e5993b-308c-4277-9128-0220771cc7d7-config-volume\") pod \"coredns-66bc5c9577-fkv7c\" (UID: \"49e5993b-308c-4277-9128-0220771cc7d7\") " pod="kube-system/coredns-66bc5c9577-fkv7c" Nov 5 15:50:32.063178 kubelet[2770]: I1105 15:50:32.063014 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zss8w\" (UniqueName: \"kubernetes.io/projected/49e5993b-308c-4277-9128-0220771cc7d7-kube-api-access-zss8w\") pod \"coredns-66bc5c9577-fkv7c\" (UID: \"49e5993b-308c-4277-9128-0220771cc7d7\") " pod="kube-system/coredns-66bc5c9577-fkv7c" Nov 5 15:50:32.063178 kubelet[2770]: I1105 15:50:32.063039 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wrn\" (UniqueName: \"kubernetes.io/projected/8bf30910-9f42-4031-b0d3-98322f6b22bd-kube-api-access-c7wrn\") pod \"coredns-66bc5c9577-wc4gx\" (UID: \"8bf30910-9f42-4031-b0d3-98322f6b22bd\") " pod="kube-system/coredns-66bc5c9577-wc4gx" Nov 5 15:50:32.315501 kubelet[2770]: E1105 15:50:32.315228 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:32.317467 containerd[1604]: time="2025-11-05T15:50:32.316885243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fkv7c,Uid:49e5993b-308c-4277-9128-0220771cc7d7,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:32.325476 kubelet[2770]: E1105 15:50:32.325123 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:32.326671 containerd[1604]: time="2025-11-05T15:50:32.325727939Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc4gx,Uid:8bf30910-9f42-4031-b0d3-98322f6b22bd,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:32.591264 kubelet[2770]: E1105 15:50:32.591138 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:32.620024 kubelet[2770]: I1105 15:50:32.619942 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xjsm2" podStartSLOduration=6.623761332 podStartE2EDuration="14.619889681s" podCreationTimestamp="2025-11-05 15:50:18 +0000 UTC" firstStartedPulling="2025-11-05 15:50:19.531897136 +0000 UTC m=+6.284785614" lastFinishedPulling="2025-11-05 15:50:27.528025477 +0000 UTC m=+14.280913963" observedRunningTime="2025-11-05 15:50:32.619119957 +0000 UTC m=+19.372008451" watchObservedRunningTime="2025-11-05 15:50:32.619889681 +0000 UTC m=+19.372778178" Nov 5 15:50:33.593696 kubelet[2770]: E1105 15:50:33.593443 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:34.164523 systemd-networkd[1499]: cilium_host: Link UP Nov 5 15:50:34.164673 systemd-networkd[1499]: cilium_net: Link UP Nov 5 15:50:34.164822 systemd-networkd[1499]: cilium_net: Gained carrier Nov 5 15:50:34.164946 systemd-networkd[1499]: cilium_host: Gained carrier Nov 5 15:50:34.307372 systemd-networkd[1499]: cilium_vxlan: Link UP Nov 5 15:50:34.308147 systemd-networkd[1499]: cilium_vxlan: Gained carrier Nov 5 15:50:34.542896 systemd-networkd[1499]: cilium_net: Gained IPv6LL Nov 5 15:50:34.595464 kubelet[2770]: E1105 15:50:34.595426 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:34.702699 kernel: NET: Registered PF_ALG protocol family Nov 5 15:50:35.022943 systemd-networkd[1499]: cilium_host: Gained IPv6LL Nov 5 15:50:35.551817 systemd-networkd[1499]: lxc_health: Link UP Nov 5 15:50:35.554802 systemd-networkd[1499]: lxc_health: Gained carrier Nov 5 15:50:35.882834 kernel: eth0: renamed from tmp4c395 Nov 5 15:50:35.882791 systemd-networkd[1499]: lxca40290fb3beb: Link UP Nov 5 15:50:35.886352 systemd-networkd[1499]: lxca40290fb3beb: Gained carrier Nov 5 15:50:35.915015 systemd-networkd[1499]: lxcdaaf33b2fa00: Link UP Nov 5 15:50:35.924109 kernel: eth0: renamed from tmp2f0c9 Nov 5 15:50:35.929796 systemd-networkd[1499]: lxcdaaf33b2fa00: Gained carrier Nov 5 15:50:36.239887 systemd-networkd[1499]: cilium_vxlan: Gained IPv6LL Nov 5 15:50:37.006868 systemd-networkd[1499]: lxca40290fb3beb: Gained IPv6LL Nov 5 15:50:37.198862 systemd-networkd[1499]: lxcdaaf33b2fa00: Gained IPv6LL Nov 5 15:50:37.364309 kubelet[2770]: E1105 15:50:37.364249 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:37.455909 systemd-networkd[1499]: lxc_health: Gained IPv6LL Nov 5 15:50:37.605105 kubelet[2770]: E1105 15:50:37.604441 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:38.606502 kubelet[2770]: E1105 15:50:38.606455 2770 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:40.300688 containerd[1604]: time="2025-11-05T15:50:40.298051796Z" level=info msg="connecting to shim 2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912" address="unix:///run/containerd/s/8713691946d91962671116398fa01205bfa0a63d36fb51268c8a3dc5621c06d8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:40.350978 containerd[1604]: time="2025-11-05T15:50:40.350936389Z" level=info msg="connecting to shim 4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906" address="unix:///run/containerd/s/9993c5470e4ba1a5c4251ecc9f2039a78a6efd5f145efdefba202bd4b422b81f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:40.356989 systemd[1]: Started cri-containerd-2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912.scope - libcontainer container 2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912. Nov 5 15:50:40.407709 systemd[1]: Started cri-containerd-4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906.scope - libcontainer container 4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906. Nov 5 15:50:40.508018 containerd[1604]: time="2025-11-05T15:50:40.507963370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-fkv7c,Uid:49e5993b-308c-4277-9128-0220771cc7d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906\"" Nov 5 15:50:40.511723 kubelet[2770]: E1105 15:50:40.511689 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:40.518775 containerd[1604]: time="2025-11-05T15:50:40.518731550Z" level=info msg="CreateContainer within sandbox \"4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:40.521022 containerd[1604]: time="2025-11-05T15:50:40.520985955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wc4gx,Uid:8bf30910-9f42-4031-b0d3-98322f6b22bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912\"" Nov 5 15:50:40.524001 kubelet[2770]: E1105 15:50:40.523503 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:40.534737 containerd[1604]: time="2025-11-05T15:50:40.533813312Z" level=info msg="CreateContainer within sandbox \"2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:50:40.546670 containerd[1604]: time="2025-11-05T15:50:40.545918855Z" level=info msg="Container 4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:40.551388 containerd[1604]: time="2025-11-05T15:50:40.551294470Z" level=info msg="Container 69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:40.556600 containerd[1604]: time="2025-11-05T15:50:40.556541018Z" level=info msg="CreateContainer within sandbox \"2f0c9f92cd24a66046920ac80413ebfe554f448fde8bc98ef14f3dd446ced912\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14\"" Nov 5 15:50:40.558113 containerd[1604]: time="2025-11-05T15:50:40.558056235Z" level=info msg="StartContainer for \"69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14\"" Nov 5 15:50:40.559688 containerd[1604]: time="2025-11-05T15:50:40.559634832Z" level=info msg="connecting to shim 69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14" address="unix:///run/containerd/s/8713691946d91962671116398fa01205bfa0a63d36fb51268c8a3dc5621c06d8" protocol=ttrpc version=3 Nov 5 15:50:40.562615 containerd[1604]: time="2025-11-05T15:50:40.562567140Z" level=info msg="CreateContainer within sandbox \"4c395def52abcde0aeaadfb5727f953e339d867b8b10e2af9b97bd4b926f3906\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406\"" Nov 5 15:50:40.567076 containerd[1604]: time="2025-11-05T15:50:40.567038823Z" level=info msg="StartContainer for \"4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406\"" Nov 5 15:50:40.570467 containerd[1604]: time="2025-11-05T15:50:40.570099119Z" level=info msg="connecting to shim 4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406" address="unix:///run/containerd/s/9993c5470e4ba1a5c4251ecc9f2039a78a6efd5f145efdefba202bd4b422b81f" protocol=ttrpc version=3 Nov 5 15:50:40.595909 systemd[1]: Started cri-containerd-69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14.scope - libcontainer container 69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14. Nov 5 15:50:40.604049 systemd[1]: Started cri-containerd-4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406.scope - libcontainer container 4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406. Nov 5 15:50:40.667611 containerd[1604]: time="2025-11-05T15:50:40.667049860Z" level=info msg="StartContainer for \"4c9a336d83774d4bf5b63c46477aee37efbe7c1897c3d85c5b5bf2fb386fe406\" returns successfully" Nov 5 15:50:40.673863 containerd[1604]: time="2025-11-05T15:50:40.673813137Z" level=info msg="StartContainer for \"69143ed8b1a487060bcfa6c8ac229f88003c990894fcb4c396f8a2776780fc14\" returns successfully" Nov 5 15:50:41.277117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1769936821.mount: Deactivated successfully. 
Nov 5 15:50:41.639093 kubelet[2770]: E1105 15:50:41.638125 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:41.643701 kubelet[2770]: E1105 15:50:41.643497 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:41.673516 kubelet[2770]: I1105 15:50:41.671111 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wc4gx" podStartSLOduration=22.671051727 podStartE2EDuration="22.671051727s" podCreationTimestamp="2025-11-05 15:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:41.66918804 +0000 UTC m=+28.422076534" watchObservedRunningTime="2025-11-05 15:50:41.671051727 +0000 UTC m=+28.423940220" Nov 5 15:50:41.720033 kubelet[2770]: I1105 15:50:41.719885 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-fkv7c" podStartSLOduration=22.719863781 podStartE2EDuration="22.719863781s" podCreationTimestamp="2025-11-05 15:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:50:41.717828269 +0000 UTC m=+28.470716761" watchObservedRunningTime="2025-11-05 15:50:41.719863781 +0000 UTC m=+28.472752274" Nov 5 15:50:42.645140 kubelet[2770]: E1105 15:50:42.645091 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:42.646529 kubelet[2770]: E1105 15:50:42.646486 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:43.647377 kubelet[2770]: E1105 15:50:43.647317 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:43.648959 kubelet[2770]: E1105 15:50:43.648903 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:06.870381 systemd[1]: Started sshd@7-24.144.92.23:22-139.178.68.195:34214.service - OpenSSH per-connection server daemon (139.178.68.195:34214). Nov 5 15:51:07.001451 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 34214 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:07.003139 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:07.015888 systemd-logind[1571]: New session 8 of user core. Nov 5 15:51:07.023024 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:51:07.627613 sshd[4101]: Connection closed by 139.178.68.195 port 34214 Nov 5 15:51:07.628390 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:07.647427 systemd[1]: sshd@7-24.144.92.23:22-139.178.68.195:34214.service: Deactivated successfully. 
Nov 5 15:51:07.652925 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:51:07.656032 systemd-logind[1571]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:51:07.660743 systemd-logind[1571]: Removed session 8. Nov 5 15:51:12.643867 systemd[1]: Started sshd@8-24.144.92.23:22-139.178.68.195:34218.service - OpenSSH per-connection server daemon (139.178.68.195:34218). Nov 5 15:51:12.712765 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 34218 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:12.714899 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:12.722679 systemd-logind[1571]: New session 9 of user core. Nov 5 15:51:12.729909 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:51:12.883716 sshd[4119]: Connection closed by 139.178.68.195 port 34218 Nov 5 15:51:12.884580 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:12.890261 systemd[1]: sshd@8-24.144.92.23:22-139.178.68.195:34218.service: Deactivated successfully. Nov 5 15:51:12.893636 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:51:12.895387 systemd-logind[1571]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:51:12.897918 systemd-logind[1571]: Removed session 9. Nov 5 15:51:17.899428 systemd[1]: Started sshd@9-24.144.92.23:22-139.178.68.195:49380.service - OpenSSH per-connection server daemon (139.178.68.195:49380). Nov 5 15:51:17.994444 sshd[4134]: Accepted publickey for core from 139.178.68.195 port 49380 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:17.996312 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:18.007340 systemd-logind[1571]: New session 10 of user core. Nov 5 15:51:18.011845 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:51:18.149194 sshd[4137]: Connection closed by 139.178.68.195 port 49380 Nov 5 15:51:18.149918 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:18.156040 systemd[1]: sshd@9-24.144.92.23:22-139.178.68.195:49380.service: Deactivated successfully. Nov 5 15:51:18.159613 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:51:18.161531 systemd-logind[1571]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:51:18.163514 systemd-logind[1571]: Removed session 10. Nov 5 15:51:23.166022 systemd[1]: Started sshd@10-24.144.92.23:22-139.178.68.195:53716.service - OpenSSH per-connection server daemon (139.178.68.195:53716). Nov 5 15:51:23.246522 sshd[4151]: Accepted publickey for core from 139.178.68.195 port 53716 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:23.248181 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:23.255760 systemd-logind[1571]: New session 11 of user core. Nov 5 15:51:23.258890 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:51:23.405953 sshd[4154]: Connection closed by 139.178.68.195 port 53716 Nov 5 15:51:23.406736 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:23.419037 systemd[1]: sshd@10-24.144.92.23:22-139.178.68.195:53716.service: Deactivated successfully. Nov 5 15:51:23.422256 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:51:23.423619 systemd-logind[1571]: Session 11 logged out. Waiting for processes to exit. 
Nov 5 15:51:23.430104 systemd[1]: Started sshd@11-24.144.92.23:22-139.178.68.195:53730.service - OpenSSH per-connection server daemon (139.178.68.195:53730). Nov 5 15:51:23.431258 systemd-logind[1571]: Removed session 11. Nov 5 15:51:23.510641 sshd[4167]: Accepted publickey for core from 139.178.68.195 port 53730 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:23.512395 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:23.518942 systemd-logind[1571]: New session 12 of user core. Nov 5 15:51:23.530075 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:51:23.751822 sshd[4170]: Connection closed by 139.178.68.195 port 53730 Nov 5 15:51:23.753392 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:23.770437 systemd[1]: sshd@11-24.144.92.23:22-139.178.68.195:53730.service: Deactivated successfully. Nov 5 15:51:23.776993 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:51:23.779688 systemd-logind[1571]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:51:23.791909 systemd[1]: Started sshd@12-24.144.92.23:22-139.178.68.195:53742.service - OpenSSH per-connection server daemon (139.178.68.195:53742). Nov 5 15:51:23.795424 systemd-logind[1571]: Removed session 12. Nov 5 15:51:23.874289 sshd[4180]: Accepted publickey for core from 139.178.68.195 port 53742 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:23.876957 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:23.886106 systemd-logind[1571]: New session 13 of user core. Nov 5 15:51:23.889903 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:51:24.040086 sshd[4184]: Connection closed by 139.178.68.195 port 53742 Nov 5 15:51:24.041137 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:24.048519 systemd[1]: sshd@12-24.144.92.23:22-139.178.68.195:53742.service: Deactivated successfully. Nov 5 15:51:24.049002 systemd-logind[1571]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:51:24.051903 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:51:24.056280 systemd-logind[1571]: Removed session 13. Nov 5 15:51:28.451890 kubelet[2770]: E1105 15:51:28.451836 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:29.058363 systemd[1]: Started sshd@13-24.144.92.23:22-139.178.68.195:53746.service - OpenSSH per-connection server daemon (139.178.68.195:53746). Nov 5 15:51:29.126047 sshd[4195]: Accepted publickey for core from 139.178.68.195 port 53746 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:29.127640 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:29.134339 systemd-logind[1571]: New session 14 of user core. Nov 5 15:51:29.144006 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:51:29.289295 sshd[4198]: Connection closed by 139.178.68.195 port 53746 Nov 5 15:51:29.289970 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:29.296788 systemd[1]: sshd@13-24.144.92.23:22-139.178.68.195:53746.service: Deactivated successfully. Nov 5 15:51:29.301037 systemd[1]: session-14.scope: Deactivated successfully. 
Nov 5 15:51:29.303028 systemd-logind[1571]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:51:29.306343 systemd-logind[1571]: Removed session 14. Nov 5 15:51:34.311527 systemd[1]: Started sshd@14-24.144.92.23:22-139.178.68.195:50276.service - OpenSSH per-connection server daemon (139.178.68.195:50276). Nov 5 15:51:34.381734 sshd[4209]: Accepted publickey for core from 139.178.68.195 port 50276 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:34.383845 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:34.391724 systemd-logind[1571]: New session 15 of user core. Nov 5 15:51:34.402118 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:51:34.550206 sshd[4212]: Connection closed by 139.178.68.195 port 50276 Nov 5 15:51:34.550899 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:34.563227 systemd[1]: sshd@14-24.144.92.23:22-139.178.68.195:50276.service: Deactivated successfully. Nov 5 15:51:34.566531 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:51:34.568422 systemd-logind[1571]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:51:34.572831 systemd[1]: Started sshd@15-24.144.92.23:22-139.178.68.195:50288.service - OpenSSH per-connection server daemon (139.178.68.195:50288). Nov 5 15:51:34.574139 systemd-logind[1571]: Removed session 15. Nov 5 15:51:34.636476 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 50288 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:34.638572 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:34.646151 systemd-logind[1571]: New session 16 of user core. Nov 5 15:51:34.651974 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:51:35.013092 sshd[4227]: Connection closed by 139.178.68.195 port 50288 Nov 5 15:51:35.014239 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:35.029462 systemd[1]: sshd@15-24.144.92.23:22-139.178.68.195:50288.service: Deactivated successfully. Nov 5 15:51:35.032160 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:51:35.033710 systemd-logind[1571]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:51:35.038341 systemd[1]: Started sshd@16-24.144.92.23:22-139.178.68.195:50292.service - OpenSSH per-connection server daemon (139.178.68.195:50292). Nov 5 15:51:35.042072 systemd-logind[1571]: Removed session 16. Nov 5 15:51:35.137614 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 50292 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:35.139812 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:35.147701 systemd-logind[1571]: New session 17 of user core. Nov 5 15:51:35.159016 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:51:35.851680 sshd[4240]: Connection closed by 139.178.68.195 port 50292 Nov 5 15:51:35.852157 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:35.870097 systemd[1]: sshd@16-24.144.92.23:22-139.178.68.195:50292.service: Deactivated successfully. Nov 5 15:51:35.875599 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:51:35.877969 systemd-logind[1571]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:51:35.883842 systemd-logind[1571]: Removed session 17. 
Nov 5 15:51:35.887004 systemd[1]: Started sshd@17-24.144.92.23:22-139.178.68.195:50302.service - OpenSSH per-connection server daemon (139.178.68.195:50302). Nov 5 15:51:35.980426 sshd[4255]: Accepted publickey for core from 139.178.68.195 port 50302 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:35.982461 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:35.990824 systemd-logind[1571]: New session 18 of user core. Nov 5 15:51:35.996098 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:51:36.272992 sshd[4258]: Connection closed by 139.178.68.195 port 50302 Nov 5 15:51:36.274218 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:36.289069 systemd[1]: sshd@17-24.144.92.23:22-139.178.68.195:50302.service: Deactivated successfully. Nov 5 15:51:36.292562 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:51:36.294764 systemd-logind[1571]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:51:36.303173 systemd[1]: Started sshd@18-24.144.92.23:22-139.178.68.195:50312.service - OpenSSH per-connection server daemon (139.178.68.195:50312). Nov 5 15:51:36.305029 systemd-logind[1571]: Removed session 18. Nov 5 15:51:36.373277 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 50312 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:36.375198 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:36.382608 systemd-logind[1571]: New session 19 of user core. Nov 5 15:51:36.385886 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:51:36.529762 sshd[4271]: Connection closed by 139.178.68.195 port 50312 Nov 5 15:51:36.529730 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:36.535961 systemd[1]: sshd@18-24.144.92.23:22-139.178.68.195:50312.service: Deactivated successfully. Nov 5 15:51:36.538602 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:51:36.540138 systemd-logind[1571]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:51:36.542430 systemd-logind[1571]: Removed session 19. Nov 5 15:51:38.451785 kubelet[2770]: E1105 15:51:38.451741 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:41.452117 kubelet[2770]: E1105 15:51:41.451795 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:41.545736 systemd[1]: Started sshd@19-24.144.92.23:22-139.178.68.195:50318.service - OpenSSH per-connection server daemon (139.178.68.195:50318). Nov 5 15:51:41.614364 sshd[4289]: Accepted publickey for core from 139.178.68.195 port 50318 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:41.616021 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:41.621875 systemd-logind[1571]: New session 20 of user core. Nov 5 15:51:41.628990 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 5 15:51:41.761864 sshd[4292]: Connection closed by 139.178.68.195 port 50318 Nov 5 15:51:41.762833 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:41.767898 systemd[1]: sshd@19-24.144.92.23:22-139.178.68.195:50318.service: Deactivated successfully. Nov 5 15:51:41.770601 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:51:41.774006 systemd-logind[1571]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:51:41.775013 systemd-logind[1571]: Removed session 20. Nov 5 15:51:46.778508 systemd[1]: Started sshd@20-24.144.92.23:22-139.178.68.195:36572.service - OpenSSH per-connection server daemon (139.178.68.195:36572). Nov 5 15:51:46.844330 sshd[4303]: Accepted publickey for core from 139.178.68.195 port 36572 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:46.846275 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:46.851814 systemd-logind[1571]: New session 21 of user core. Nov 5 15:51:46.860136 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:51:46.998682 sshd[4306]: Connection closed by 139.178.68.195 port 36572 Nov 5 15:51:46.999415 sshd-session[4303]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:47.006073 systemd[1]: sshd@20-24.144.92.23:22-139.178.68.195:36572.service: Deactivated successfully. Nov 5 15:51:47.010442 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:51:47.012091 systemd-logind[1571]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:51:47.014489 systemd-logind[1571]: Removed session 21. Nov 5 15:51:47.454997 kubelet[2770]: E1105 15:51:47.453467 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:48.451844 kubelet[2770]: E1105 15:51:48.451785 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:49.452674 kubelet[2770]: E1105 15:51:49.452150 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:52.013063 systemd[1]: Started sshd@21-24.144.92.23:22-139.178.68.195:36574.service - OpenSSH per-connection server daemon (139.178.68.195:36574). Nov 5 15:51:52.080015 sshd[4320]: Accepted publickey for core from 139.178.68.195 port 36574 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:52.082573 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:52.089355 systemd-logind[1571]: New session 22 of user core. Nov 5 15:51:52.097964 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:51:52.241667 sshd[4323]: Connection closed by 139.178.68.195 port 36574 Nov 5 15:51:52.242232 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:52.253073 systemd[1]: sshd@21-24.144.92.23:22-139.178.68.195:36574.service: Deactivated successfully. Nov 5 15:51:52.256126 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:51:52.257617 systemd-logind[1571]: Session 22 logged out. Waiting for processes to exit. 
Nov 5 15:51:52.261553 systemd[1]: Started sshd@22-24.144.92.23:22-139.178.68.195:36578.service - OpenSSH per-connection server daemon (139.178.68.195:36578). Nov 5 15:51:52.265776 systemd-logind[1571]: Removed session 22. Nov 5 15:51:52.347232 sshd[4334]: Accepted publickey for core from 139.178.68.195 port 36578 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:52.354269 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:52.361299 systemd-logind[1571]: New session 23 of user core. Nov 5 15:51:52.367896 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:51:53.854626 containerd[1604]: time="2025-11-05T15:51:53.854545750Z" level=info msg="StopContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" with timeout 30 (s)" Nov 5 15:51:53.857382 containerd[1604]: time="2025-11-05T15:51:53.856983117Z" level=info msg="Stop container \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" with signal terminated" Nov 5 15:51:53.913898 containerd[1604]: time="2025-11-05T15:51:53.913826162Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:51:53.925910 systemd[1]: cri-containerd-e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3.scope: Deactivated successfully. Nov 5 15:51:53.928700 containerd[1604]: time="2025-11-05T15:51:53.927187096Z" level=info msg="received exit event container_id:\"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" id:\"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" pid:3334 exited_at:{seconds:1762357913 nanos:926850405}" Nov 5 15:51:53.928700 containerd[1604]: time="2025-11-05T15:51:53.927541130Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" id:\"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" pid:3334 exited_at:{seconds:1762357913 nanos:926850405}" Nov 5 15:51:53.939547 containerd[1604]: time="2025-11-05T15:51:53.939499746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" id:\"fc48b315d9647fa82b23253035ad1938725ebbb2368af9accdc6e6ba0ea46cee\" pid:4357 exited_at:{seconds:1762357913 nanos:939146239}" Nov 5 15:51:53.946136 containerd[1604]: time="2025-11-05T15:51:53.945907553Z" level=info msg="StopContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" with timeout 2 (s)" Nov 5 15:51:53.946859 containerd[1604]: time="2025-11-05T15:51:53.946836256Z" level=info msg="Stop container \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" with signal terminated" Nov 5 15:51:53.959987 systemd-networkd[1499]: lxc_health: Link DOWN Nov 5 15:51:53.959999 systemd-networkd[1499]: lxc_health: Lost carrier Nov 5 15:51:54.003481 containerd[1604]: time="2025-11-05T15:51:54.003130696Z" level=info msg="received exit event container_id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" pid:3426 exited_at:{seconds:1762357914 nanos:1521546}" Nov 5 15:51:54.003481 containerd[1604]: time="2025-11-05T15:51:54.003439955Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" id:\"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" pid:3426 exited_at:{seconds:1762357914 nanos:1521546}" Nov 5 15:51:54.003914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3-rootfs.mount: Deactivated successfully. Nov 5 15:51:54.007267 systemd[1]: cri-containerd-61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab.scope: Deactivated successfully. Nov 5 15:51:54.008000 systemd[1]: cri-containerd-61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab.scope: Consumed 7.682s CPU time, 198M memory peak, 73.8M read from disk, 13.3M written to disk. Nov 5 15:51:54.013819 containerd[1604]: time="2025-11-05T15:51:54.013628702Z" level=info msg="StopContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" returns successfully" Nov 5 15:51:54.016237 containerd[1604]: time="2025-11-05T15:51:54.016133622Z" level=info msg="StopPodSandbox for \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\"" Nov 5 15:51:54.016831 containerd[1604]: time="2025-11-05T15:51:54.016619756Z" level=info msg="Container to stop \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.028468 systemd[1]: cri-containerd-dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1.scope: Deactivated successfully. Nov 5 15:51:54.041225 containerd[1604]: time="2025-11-05T15:51:54.041163317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" pid:2989 exit_status:137 exited_at:{seconds:1762357914 nanos:40068720}" Nov 5 15:51:54.043167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab-rootfs.mount: Deactivated successfully. 
Nov 5 15:51:54.050709 containerd[1604]: time="2025-11-05T15:51:54.050465642Z" level=info msg="StopContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" returns successfully" Nov 5 15:51:54.051657 containerd[1604]: time="2025-11-05T15:51:54.051609958Z" level=info msg="StopPodSandbox for \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\"" Nov 5 15:51:54.051762 containerd[1604]: time="2025-11-05T15:51:54.051698368Z" level=info msg="Container to stop \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.051762 containerd[1604]: time="2025-11-05T15:51:54.051712199Z" level=info msg="Container to stop \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.051762 containerd[1604]: time="2025-11-05T15:51:54.051720720Z" level=info msg="Container to stop \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.051762 containerd[1604]: time="2025-11-05T15:51:54.051728547Z" level=info msg="Container to stop \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.051762 containerd[1604]: time="2025-11-05T15:51:54.051736100Z" level=info msg="Container to stop \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:51:54.062671 systemd[1]: cri-containerd-df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6.scope: Deactivated successfully. Nov 5 15:51:54.097081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1-rootfs.mount: Deactivated successfully. Nov 5 15:51:54.105675 containerd[1604]: time="2025-11-05T15:51:54.105534483Z" level=info msg="shim disconnected" id=dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1 namespace=k8s.io Nov 5 15:51:54.105675 containerd[1604]: time="2025-11-05T15:51:54.105572709Z" level=warning msg="cleaning up after shim disconnected" id=dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1 namespace=k8s.io Nov 5 15:51:54.105675 containerd[1604]: time="2025-11-05T15:51:54.105581255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:51:54.111151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6-rootfs.mount: Deactivated successfully. 
Nov 5 15:51:54.130253 containerd[1604]: time="2025-11-05T15:51:54.130210117Z" level=info msg="shim disconnected" id=df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6 namespace=k8s.io Nov 5 15:51:54.131066 containerd[1604]: time="2025-11-05T15:51:54.130452169Z" level=warning msg="cleaning up after shim disconnected" id=df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6 namespace=k8s.io Nov 5 15:51:54.131066 containerd[1604]: time="2025-11-05T15:51:54.130487070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:51:54.138622 containerd[1604]: time="2025-11-05T15:51:54.138573813Z" level=error msg="Failed to handle event container_id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" pid:2989 exit_status:137 exited_at:{seconds:1762357914 nanos:40068720} for dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Nov 5 15:51:54.138896 containerd[1604]: time="2025-11-05T15:51:54.138862008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" id:\"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" pid:2919 exit_status:137 exited_at:{seconds:1762357914 nanos:69981134}" Nov 5 15:51:54.140972 containerd[1604]: time="2025-11-05T15:51:54.140935486Z" level=info msg="received exit event sandbox_id:\"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" exit_status:137 exited_at:{seconds:1762357914 nanos:69981134}" Nov 5 15:51:54.142812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6-shm.mount: Deactivated successfully. 
Nov 5 15:51:54.145073 containerd[1604]: time="2025-11-05T15:51:54.141168384Z" level=info msg="received exit event sandbox_id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" exit_status:137 exited_at:{seconds:1762357914 nanos:40068720}" Nov 5 15:51:54.145565 containerd[1604]: time="2025-11-05T15:51:54.141395341Z" level=info msg="TearDown network for sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" successfully" Nov 5 15:51:54.145565 containerd[1604]: time="2025-11-05T15:51:54.145364944Z" level=info msg="StopPodSandbox for \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" returns successfully" Nov 5 15:51:54.145565 containerd[1604]: time="2025-11-05T15:51:54.145517843Z" level=info msg="TearDown network for sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" successfully" Nov 5 15:51:54.145565 containerd[1604]: time="2025-11-05T15:51:54.145528526Z" level=info msg="StopPodSandbox for \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" returns successfully" Nov 5 15:51:54.245293 kubelet[2770]: I1105 15:51:54.245235 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-hostproc\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.246800 kubelet[2770]: I1105 15:51:54.245818 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-lib-modules\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.246800 kubelet[2770]: I1105 15:51:54.245836 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-hostproc" (OuterVolumeSpecName: "hostproc") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.246800 kubelet[2770]: I1105 15:51:54.245889 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.246800 kubelet[2770]: I1105 15:51:54.245856 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-etc-cni-netd\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.246800 kubelet[2770]: I1105 15:51:54.245911 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.247012 kubelet[2770]: I1105 15:51:54.245951 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-config-path\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247012 kubelet[2770]: I1105 15:51:54.245989 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-hubble-tls\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247012 kubelet[2770]: I1105 15:51:54.246973 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cni-path\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247012 kubelet[2770]: I1105 15:51:54.247003 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-run\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247138 kubelet[2770]: I1105 15:51:54.247031 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-xtables-lock\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247138 kubelet[2770]: I1105 15:51:54.247048 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-net\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247138 kubelet[2770]: I1105 15:51:54.247064 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-bpf-maps\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247815 kubelet[2770]: I1105 15:51:54.247789 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-cgroup\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247925 kubelet[2770]: I1105 15:51:54.247826 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10137070-5223-4ae1-9532-e98b1ec3284f-clustermesh-secrets\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247925 kubelet[2770]: I1105 15:51:54.247840 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-kernel\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 
15:51:54.247925 kubelet[2770]: I1105 15:51:54.247859 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wb7r\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-kube-api-access-9wb7r\") pod \"10137070-5223-4ae1-9532-e98b1ec3284f\" (UID: \"10137070-5223-4ae1-9532-e98b1ec3284f\") " Nov 5 15:51:54.247925 kubelet[2770]: I1105 15:51:54.247875 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmjv4\" (UniqueName: \"kubernetes.io/projected/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-kube-api-access-gmjv4\") pod \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\" (UID: \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\") " Nov 5 15:51:54.247925 kubelet[2770]: I1105 15:51:54.247890 2770 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-cilium-config-path\") pod \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\" (UID: \"09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a\") " Nov 5 15:51:54.248071 kubelet[2770]: I1105 15:51:54.247945 2770 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-hostproc\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.248071 kubelet[2770]: I1105 15:51:54.247955 2770 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-lib-modules\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.248071 kubelet[2770]: I1105 15:51:54.247966 2770 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-etc-cni-netd\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.248832 kubelet[2770]: I1105 15:51:54.248707 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cni-path" (OuterVolumeSpecName: "cni-path") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.248905 kubelet[2770]: I1105 15:51:54.248748 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.248905 kubelet[2770]: I1105 15:51:54.248866 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.248905 kubelet[2770]: I1105 15:51:54.248889 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.249019 kubelet[2770]: I1105 15:51:54.248905 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.249019 kubelet[2770]: I1105 15:51:54.248918 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.251057 kubelet[2770]: I1105 15:51:54.250729 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:51:54.253458 kubelet[2770]: I1105 15:51:54.253415 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a" (UID: "09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:51:54.255956 kubelet[2770]: I1105 15:51:54.255914 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:51:54.265282 kubelet[2770]: I1105 15:51:54.265206 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-kube-api-access-9wb7r" (OuterVolumeSpecName: "kube-api-access-9wb7r") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "kube-api-access-9wb7r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:51:54.266605 kubelet[2770]: I1105 15:51:54.266554 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:51:54.267241 kubelet[2770]: I1105 15:51:54.267205 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-kube-api-access-gmjv4" (OuterVolumeSpecName: "kube-api-access-gmjv4") pod "09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a" (UID: "09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a"). 
InnerVolumeSpecName "kube-api-access-gmjv4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:51:54.267381 kubelet[2770]: I1105 15:51:54.267355 2770 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10137070-5223-4ae1-9532-e98b1ec3284f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10137070-5223-4ae1-9532-e98b1ec3284f" (UID: "10137070-5223-4ae1-9532-e98b1ec3284f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349183 2770 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-bpf-maps\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349219 2770 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-cgroup\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349229 2770 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10137070-5223-4ae1-9532-e98b1ec3284f-clustermesh-secrets\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349238 2770 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-kernel\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349248 2770 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9wb7r\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-kube-api-access-9wb7r\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349253 kubelet[2770]: I1105 15:51:54.349258 2770 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gmjv4\" (UniqueName: \"kubernetes.io/projected/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-kube-api-access-gmjv4\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349292 2770 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a-cilium-config-path\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349303 2770 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-config-path\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349311 2770 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10137070-5223-4ae1-9532-e98b1ec3284f-hubble-tls\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349318 2770 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cni-path\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349326 2770 reconciler_common.go:299] "Volume detached for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-cilium-run\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349334 2770 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-xtables-lock\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.349549 kubelet[2770]: I1105 15:51:54.349343 2770 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10137070-5223-4ae1-9532-e98b1ec3284f-host-proc-sys-net\") on node \"ci-4487.0.1-5-f7907e7d84\" DevicePath \"\"" Nov 5 15:51:54.842767 kubelet[2770]: I1105 15:51:54.842718 2770 scope.go:117] "RemoveContainer" containerID="61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab" Nov 5 15:51:54.850919 containerd[1604]: time="2025-11-05T15:51:54.850876237Z" level=info msg="RemoveContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\"" Nov 5 15:51:54.857122 systemd[1]: Removed slice kubepods-burstable-pod10137070_5223_4ae1_9532_e98b1ec3284f.slice - libcontainer container kubepods-burstable-pod10137070_5223_4ae1_9532_e98b1ec3284f.slice. Nov 5 15:51:54.858041 systemd[1]: kubepods-burstable-pod10137070_5223_4ae1_9532_e98b1ec3284f.slice: Consumed 7.812s CPU time, 198.3M memory peak, 74.9M read from disk, 13.3M written to disk. Nov 5 15:51:54.865191 systemd[1]: Removed slice kubepods-besteffort-pod09f1bbf5_fb38_49ef_80ed_eec3e64a2f8a.slice - libcontainer container kubepods-besteffort-pod09f1bbf5_fb38_49ef_80ed_eec3e64a2f8a.slice. Nov 5 15:51:54.866989 containerd[1604]: time="2025-11-05T15:51:54.866929713Z" level=info msg="RemoveContainer for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" returns successfully" Nov 5 15:51:54.869072 kubelet[2770]: I1105 15:51:54.867973 2770 scope.go:117] "RemoveContainer" containerID="14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e" Nov 5 15:51:54.870828 containerd[1604]: time="2025-11-05T15:51:54.870776659Z" level=info msg="RemoveContainer for \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\"" Nov 5 15:51:54.879051 containerd[1604]: time="2025-11-05T15:51:54.878993011Z" level=info msg="RemoveContainer for \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" returns successfully" Nov 5 15:51:54.880671 kubelet[2770]: I1105 15:51:54.879548 2770 scope.go:117] "RemoveContainer" containerID="76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c" Nov 5 15:51:54.885203 containerd[1604]: time="2025-11-05T15:51:54.885162048Z" level=info msg="RemoveContainer for \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\"" Nov 5 15:51:54.891144 containerd[1604]: time="2025-11-05T15:51:54.891028748Z" level=info msg="RemoveContainer for \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" returns successfully" Nov 5 15:51:54.891568 kubelet[2770]: I1105 15:51:54.891542 2770 scope.go:117] "RemoveContainer" containerID="810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6" Nov 5 15:51:54.894015 containerd[1604]: time="2025-11-05T15:51:54.893974614Z" level=info msg="RemoveContainer for \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\"" Nov 5 15:51:54.898756 containerd[1604]: time="2025-11-05T15:51:54.898292785Z" level=info msg="RemoveContainer for 
\"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" returns successfully" Nov 5 15:51:54.900252 kubelet[2770]: I1105 15:51:54.900064 2770 scope.go:117] "RemoveContainer" containerID="b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5" Nov 5 15:51:54.907783 containerd[1604]: time="2025-11-05T15:51:54.907738135Z" level=info msg="RemoveContainer for \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\"" Nov 5 15:51:54.911831 containerd[1604]: time="2025-11-05T15:51:54.911774013Z" level=info msg="RemoveContainer for \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" returns successfully" Nov 5 15:51:54.912324 kubelet[2770]: I1105 15:51:54.912288 2770 scope.go:117] "RemoveContainer" containerID="61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab" Nov 5 15:51:54.912896 containerd[1604]: time="2025-11-05T15:51:54.912845565Z" level=error msg="ContainerStatus for \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\": not found" Nov 5 15:51:54.913295 kubelet[2770]: E1105 15:51:54.913243 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\": not found" containerID="61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab" Nov 5 15:51:54.913420 kubelet[2770]: I1105 15:51:54.913301 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab"} err="failed to get container status \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\": rpc error: code = NotFound desc = an error occurred when try to find container \"61459d1d4e6382d7b1cb60caf28d88a59d441691aab78114069c46956344ffab\": not found" Nov 5 15:51:54.913420 kubelet[2770]: I1105 15:51:54.913343 2770 scope.go:117] "RemoveContainer" containerID="14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e" Nov 5 15:51:54.913632 containerd[1604]: time="2025-11-05T15:51:54.913602112Z" level=error msg="ContainerStatus for \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\": not found" Nov 5 15:51:54.913832 kubelet[2770]: E1105 15:51:54.913767 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\": not found" containerID="14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e" Nov 5 15:51:54.913832 kubelet[2770]: I1105 15:51:54.913793 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e"} err="failed to get container status \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\": rpc error: code = NotFound desc = an error occurred when try to find container \"14ba71b13cf0f2df683ae2251669f3e535bc2a350aa533cb6053ea23a96b056e\": not found" Nov 5 15:51:54.913832 kubelet[2770]: I1105 15:51:54.913822 2770 scope.go:117] "RemoveContainer" 
containerID="76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c" Nov 5 15:51:54.914231 containerd[1604]: time="2025-11-05T15:51:54.914188623Z" level=error msg="ContainerStatus for \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\": not found" Nov 5 15:51:54.914586 kubelet[2770]: E1105 15:51:54.914560 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\": not found" containerID="76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c" Nov 5 15:51:54.914586 kubelet[2770]: I1105 15:51:54.914585 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c"} err="failed to get container status \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\": rpc error: code = NotFound desc = an error occurred when try to find container \"76141d83fda743a113c940073d1442ed93c1039fc3dd3b126171f7997f93a97c\": not found" Nov 5 15:51:54.914793 kubelet[2770]: I1105 15:51:54.914601 2770 scope.go:117] "RemoveContainer" containerID="810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6" Nov 5 15:51:54.915030 containerd[1604]: time="2025-11-05T15:51:54.914991429Z" level=error msg="ContainerStatus for \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\": not found" Nov 5 15:51:54.915364 kubelet[2770]: E1105 15:51:54.915309 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\": not found" containerID="810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6" Nov 5 15:51:54.915364 kubelet[2770]: I1105 15:51:54.915355 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6"} err="failed to get container status \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"810576cb9318729d462d8df3fa1635c1a1f90892bd338dce9b5bf934fc7f19a6\": not found" Nov 5 15:51:54.915609 kubelet[2770]: I1105 15:51:54.915370 2770 scope.go:117] "RemoveContainer" containerID="b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5" Nov 5 15:51:54.915710 containerd[1604]: time="2025-11-05T15:51:54.915591434Z" level=error msg="ContainerStatus for \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\": not found" Nov 5 15:51:54.915934 kubelet[2770]: E1105 15:51:54.915907 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\": not found" 
containerID="b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5" Nov 5 15:51:54.916061 kubelet[2770]: I1105 15:51:54.916036 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5"} err="failed to get container status \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9cee176b022c0d5c4c29baf07488cbdaec61893bc3986a12bfe03655e571ec5\": not found" Nov 5 15:51:54.916144 kubelet[2770]: I1105 15:51:54.916129 2770 scope.go:117] "RemoveContainer" containerID="e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3" Nov 5 15:51:54.918126 containerd[1604]: time="2025-11-05T15:51:54.918084532Z" level=info msg="RemoveContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\"" Nov 5 15:51:54.922003 containerd[1604]: time="2025-11-05T15:51:54.921316619Z" level=info msg="RemoveContainer for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" returns successfully" Nov 5 15:51:54.922003 containerd[1604]: time="2025-11-05T15:51:54.921909930Z" level=error msg="ContainerStatus for \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\": not found" Nov 5 15:51:54.922204 kubelet[2770]: I1105 15:51:54.921616 2770 scope.go:117] "RemoveContainer" containerID="e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3" Nov 5 15:51:54.922456 kubelet[2770]: E1105 15:51:54.922425 2770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\": not found" containerID="e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3" Nov 5 15:51:54.922533 kubelet[2770]: I1105 15:51:54.922459 2770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3"} err="failed to get container status \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"e91a5581dee886f50f3bf542e9459ab50a673d587a6bb53ad8811f0d7c4349e3\": not found" Nov 5 15:51:54.998498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1-shm.mount: Deactivated successfully. Nov 5 15:51:54.999147 systemd[1]: var-lib-kubelet-pods-09f1bbf5\x2dfb38\x2d49ef\x2d80ed\x2deec3e64a2f8a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmjv4.mount: Deactivated successfully. Nov 5 15:51:54.999337 systemd[1]: var-lib-kubelet-pods-10137070\x2d5223\x2d4ae1\x2d9532\x2de98b1ec3284f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9wb7r.mount: Deactivated successfully. Nov 5 15:51:54.999485 systemd[1]: var-lib-kubelet-pods-10137070\x2d5223\x2d4ae1\x2d9532\x2de98b1ec3284f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 15:51:54.999696 systemd[1]: var-lib-kubelet-pods-10137070\x2d5223\x2d4ae1\x2d9532\x2de98b1ec3284f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 5 15:51:55.455323 kubelet[2770]: I1105 15:51:55.455227 2770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a" path="/var/lib/kubelet/pods/09f1bbf5-fb38-49ef-80ed-eec3e64a2f8a/volumes" Nov 5 15:51:55.456610 kubelet[2770]: I1105 15:51:55.456067 2770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10137070-5223-4ae1-9532-e98b1ec3284f" path="/var/lib/kubelet/pods/10137070-5223-4ae1-9532-e98b1ec3284f/volumes" Nov 5 15:51:55.794703 sshd[4337]: Connection closed by 139.178.68.195 port 36578 Nov 5 15:51:55.795481 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:55.810455 systemd[1]: sshd@22-24.144.92.23:22-139.178.68.195:36578.service: Deactivated successfully. Nov 5 15:51:55.814798 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:51:55.817786 systemd-logind[1571]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:51:55.822740 systemd[1]: Started sshd@23-24.144.92.23:22-139.178.68.195:39022.service - OpenSSH per-connection server daemon (139.178.68.195:39022). Nov 5 15:51:55.824641 systemd-logind[1571]: Removed session 23. Nov 5 15:51:55.853840 containerd[1604]: time="2025-11-05T15:51:55.853776879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" id:\"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" pid:2989 exit_status:137 exited_at:{seconds:1762357914 nanos:40068720}" Nov 5 15:51:55.917354 sshd[4486]: Accepted publickey for core from 139.178.68.195 port 39022 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:55.919156 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:55.925364 systemd-logind[1571]: New session 24 of user core. Nov 5 15:51:55.937070 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:51:57.457353 sshd[4489]: Connection closed by 139.178.68.195 port 39022 Nov 5 15:51:57.457060 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:57.473310 systemd[1]: sshd@23-24.144.92.23:22-139.178.68.195:39022.service: Deactivated successfully. Nov 5 15:51:57.477721 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:51:57.478328 systemd[1]: session-24.scope: Consumed 1.358s CPU time, 27.4M memory peak. Nov 5 15:51:57.479584 systemd-logind[1571]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:51:57.491128 systemd[1]: Started sshd@24-24.144.92.23:22-139.178.68.195:39038.service - OpenSSH per-connection server daemon (139.178.68.195:39038). Nov 5 15:51:57.493428 systemd-logind[1571]: Removed session 24. Nov 5 15:51:57.525314 systemd[1]: Created slice kubepods-burstable-pod4ad1de3c_55ac_45dd_a74c_74f917fe5bff.slice - libcontainer container kubepods-burstable-pod4ad1de3c_55ac_45dd_a74c_74f917fe5bff.slice. Nov 5 15:51:57.590582 sshd[4499]: Accepted publickey for core from 139.178.68.195 port 39038 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:57.591243 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:57.597630 systemd-logind[1571]: New session 25 of user core. Nov 5 15:51:57.604214 systemd[1]: Started session-25.scope - Session 25 of User core. 
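The TaskExit event above reports exit_status:137 for the torn-down dc98faa2... sandbox container. Statuses above 128 follow the usual 128-plus-signal convention, so 137 corresponds to SIGKILL. A tiny illustrative decoder, not part of any of the components logging here:

```go
// Decode the exit_status:137 seen in the TaskExit event above using the
// shell convention 128 + signal number; 137 maps to SIGKILL (9).
package main

import (
	"fmt"
	"syscall"
)

func decodeExitStatus(code int) string {
	if code > 128 {
		sig := syscall.Signal(code - 128)
		return fmt.Sprintf("terminated by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited normally with status %d", code)
}

func main() {
	fmt.Println(decodeExitStatus(137)) // terminated by signal 9 (killed)
}
```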
Nov 5 15:51:57.665629 sshd[4502]: Connection closed by 139.178.68.195 port 39038 Nov 5 15:51:57.666545 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Nov 5 15:51:57.669865 kubelet[2770]: I1105 15:51:57.669786 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-xtables-lock\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670259 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-lib-modules\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670286 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-cilium-ipsec-secrets\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670303 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-host-proc-sys-kernel\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670322 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-cilium-run\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670337 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-cilium-cgroup\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670672 kubelet[2770]: I1105 15:51:57.670376 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-hubble-tls\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 15:51:57.670403 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-hostproc\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 15:51:57.670428 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-cni-path\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 
15:51:57.670447 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-clustermesh-secrets\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 15:51:57.670466 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-cilium-config-path\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 15:51:57.670484 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-host-proc-sys-net\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.670942 kubelet[2770]: I1105 15:51:57.670506 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-bpf-maps\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.671085 kubelet[2770]: I1105 15:51:57.670526 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-etc-cni-netd\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.671085 kubelet[2770]: I1105 15:51:57.670548 2770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttbc\" (UniqueName: \"kubernetes.io/projected/4ad1de3c-55ac-45dd-a74c-74f917fe5bff-kube-api-access-dttbc\") pod \"cilium-xcmns\" (UID: \"4ad1de3c-55ac-45dd-a74c-74f917fe5bff\") " pod="kube-system/cilium-xcmns" Nov 5 15:51:57.677807 systemd[1]: sshd@24-24.144.92.23:22-139.178.68.195:39038.service: Deactivated successfully. Nov 5 15:51:57.680467 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:51:57.681576 systemd-logind[1571]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:51:57.686185 systemd[1]: Started sshd@25-24.144.92.23:22-139.178.68.195:39040.service - OpenSSH per-connection server daemon (139.178.68.195:39040). Nov 5 15:51:57.687302 systemd-logind[1571]: Removed session 25. Nov 5 15:51:57.762791 sshd[4509]: Accepted publickey for core from 139.178.68.195 port 39040 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:51:57.765436 sshd-session[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:51:57.771163 systemd-logind[1571]: New session 26 of user core. Nov 5 15:51:57.777911 systemd[1]: Started session-26.scope - Session 26 of User core. 
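The reconciler entries above enumerate the volumes of the new cilium-xcmns pod: host-path mounts (cilium-run, bpf-maps, cni-path, hostproc, and so on), the cilium-ipsec-secrets and clustermesh-secrets Secrets, the cilium-config-path ConfigMap, and the projected hubble-tls and kube-api-access-dttbc volumes. Below is a hedged sketch of how a few of those sources look as client-go volume definitions; the volume names come from the log, but the paths and object names are assumptions rather than the actual Cilium manifest.

```go
// Illustrative client-go representation of a few of the volumes the kubelet
// reconciler is attaching above. Names follow the log entries; paths and
// referenced object names are assumed for the sketch.
package ciliumspec

import (
	corev1 "k8s.io/api/core/v1"
)

func ciliumVolumesSketch() []corev1.Volume {
	hostPathType := corev1.HostPathDirectoryOrCreate
	return []corev1.Volume{
		{
			// kubernetes.io/host-path volumes such as cilium-run, bpf-maps, cni-path ...
			Name: "cilium-run",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/var/run/cilium", Type: &hostPathType},
			},
		},
		{
			// kubernetes.io/secret volumes: cilium-ipsec-secrets, clustermesh-secrets
			Name: "clustermesh-secrets",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"},
			},
		},
		{
			// kubernetes.io/configmap volume: cilium-config-path
			Name: "cilium-config-path",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"},
				},
			},
		},
	}
}
```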
Nov 5 15:51:57.831231 kubelet[2770]: E1105 15:51:57.831146 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:57.833168 containerd[1604]: time="2025-11-05T15:51:57.833129823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcmns,Uid:4ad1de3c-55ac-45dd-a74c-74f917fe5bff,Namespace:kube-system,Attempt:0,}" Nov 5 15:51:57.854759 containerd[1604]: time="2025-11-05T15:51:57.854129399Z" level=info msg="connecting to shim 0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:57.901959 systemd[1]: Started cri-containerd-0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc.scope - libcontainer container 0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc. Nov 5 15:51:57.948870 containerd[1604]: time="2025-11-05T15:51:57.948782260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xcmns,Uid:4ad1de3c-55ac-45dd-a74c-74f917fe5bff,Namespace:kube-system,Attempt:0,} returns sandbox id \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\"" Nov 5 15:51:57.950873 kubelet[2770]: E1105 15:51:57.950573 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:57.958667 containerd[1604]: time="2025-11-05T15:51:57.958597404Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 15:51:57.974947 containerd[1604]: time="2025-11-05T15:51:57.974767009Z" level=info msg="Container 5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:57.979033 containerd[1604]: time="2025-11-05T15:51:57.978985357Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\"" Nov 5 15:51:57.980487 containerd[1604]: time="2025-11-05T15:51:57.980376471Z" level=info msg="StartContainer for \"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\"" Nov 5 15:51:57.981771 containerd[1604]: time="2025-11-05T15:51:57.981698470Z" level=info msg="connecting to shim 5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" protocol=ttrpc version=3 Nov 5 15:51:58.002918 systemd[1]: Started cri-containerd-5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b.scope - libcontainer container 5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b. Nov 5 15:51:58.046429 containerd[1604]: time="2025-11-05T15:51:58.046264167Z" level=info msg="StartContainer for \"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\" returns successfully" Nov 5 15:51:58.074384 systemd[1]: cri-containerd-5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b.scope: Deactivated successfully. 
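The containerd entries above walk through the standard CRI flow for the new pod: RunPodSandbox returns the 0704cc91... sandbox, CreateContainer builds the mount-cgroup init container inside it, and StartContainer runs it (the "connecting to shim" lines are containerd wiring up the shim's ttrpc socket). A minimal sketch of the same three calls follows, assuming the CRI v1 runtime service client from the earlier sketch; the image reference is a placeholder and the configs are trimmed far below what kubelet actually sends.

```go
// Sketch of the CRI sequence logged above: RunPodSandbox, then CreateContainer
// inside the returned sandbox, then StartContainer. Configs are trimmed-down
// placeholders, not the real Cilium pod spec.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-xcmns",
			Namespace: "kube-system",
			Uid:       "4ad1de3c-55ac-45dd-a74c-74f917fe5bff",
			Attempt:   0,
		},
	}

	// "RunPodSandbox ... returns sandbox id ..."
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	// "CreateContainer within sandbox ... for &ContainerMetadata{Name:mount-cgroup,...}"
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:<tag>"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	// "StartContainer for ... returns successfully"
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```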
Nov 5 15:51:58.074759 systemd[1]: cri-containerd-5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b.scope: Consumed 27ms CPU time, 9.4M memory peak, 2.7M read from disk. Nov 5 15:51:58.078503 containerd[1604]: time="2025-11-05T15:51:58.078412156Z" level=info msg="received exit event container_id:\"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\" id:\"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\" pid:4580 exited_at:{seconds:1762357918 nanos:77606954}" Nov 5 15:51:58.078904 containerd[1604]: time="2025-11-05T15:51:58.078772288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\" id:\"5af479d3b54b56933ce23fe1fea89c573bf6cf10021c0f273d03a2286592794b\" pid:4580 exited_at:{seconds:1762357918 nanos:77606954}" Nov 5 15:51:58.587159 kubelet[2770]: E1105 15:51:58.587113 2770 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 15:51:58.784663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286140686.mount: Deactivated successfully. Nov 5 15:51:58.868379 kubelet[2770]: E1105 15:51:58.868326 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:58.875962 containerd[1604]: time="2025-11-05T15:51:58.875894972Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 15:51:58.892760 containerd[1604]: time="2025-11-05T15:51:58.889190029Z" level=info msg="Container 2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:58.891203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455417803.mount: Deactivated successfully. Nov 5 15:51:58.898308 containerd[1604]: time="2025-11-05T15:51:58.898245990Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\"" Nov 5 15:51:58.901601 containerd[1604]: time="2025-11-05T15:51:58.901555348Z" level=info msg="StartContainer for \"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\"" Nov 5 15:51:58.903311 containerd[1604]: time="2025-11-05T15:51:58.903252028Z" level=info msg="connecting to shim 2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" protocol=ttrpc version=3 Nov 5 15:51:58.933033 systemd[1]: Started cri-containerd-2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535.scope - libcontainer container 2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535. Nov 5 15:51:58.968141 containerd[1604]: time="2025-11-05T15:51:58.968104820Z" level=info msg="StartContainer for \"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\" returns successfully" Nov 5 15:51:58.983219 systemd[1]: cri-containerd-2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535.scope: Deactivated successfully. 
Nov 5 15:51:58.983711 systemd[1]: cri-containerd-2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535.scope: Consumed 24ms CPU time, 7M memory peak, 1.7M read from disk. Nov 5 15:51:58.986398 containerd[1604]: time="2025-11-05T15:51:58.986187183Z" level=info msg="received exit event container_id:\"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\" id:\"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\" pid:4624 exited_at:{seconds:1762357918 nanos:985871189}" Nov 5 15:51:58.986844 containerd[1604]: time="2025-11-05T15:51:58.986818696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\" id:\"2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535\" pid:4624 exited_at:{seconds:1762357918 nanos:985871189}" Nov 5 15:51:59.022168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c56d79d628639c700ca0ed423f959db0984b75b077e21548b010c493e148535-rootfs.mount: Deactivated successfully. Nov 5 15:51:59.879621 kubelet[2770]: E1105 15:51:59.878376 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:59.888410 containerd[1604]: time="2025-11-05T15:51:59.888352125Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 15:51:59.906958 containerd[1604]: time="2025-11-05T15:51:59.906909048Z" level=info msg="Container ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:59.914791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761018106.mount: Deactivated successfully. Nov 5 15:51:59.926057 containerd[1604]: time="2025-11-05T15:51:59.925999050Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\"" Nov 5 15:51:59.927353 containerd[1604]: time="2025-11-05T15:51:59.927320768Z" level=info msg="StartContainer for \"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\"" Nov 5 15:51:59.930704 containerd[1604]: time="2025-11-05T15:51:59.930665808Z" level=info msg="connecting to shim ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" protocol=ttrpc version=3 Nov 5 15:51:59.965921 systemd[1]: Started cri-containerd-ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24.scope - libcontainer container ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24. Nov 5 15:52:00.079388 containerd[1604]: time="2025-11-05T15:52:00.079324443Z" level=info msg="StartContainer for \"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\" returns successfully" Nov 5 15:52:00.099156 systemd[1]: cri-containerd-ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24.scope: Deactivated successfully. 
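The recurring dns.go:154 "Nameserver limits exceeded" warnings come from kubelet capping a pod's resolv.conf at three nameservers; the node's resolver list here is longer than that, so only the first three entries are applied (the line shown in the warning) and the rest are omitted. Below is a simplified stand-in for that truncation check, assuming a plain resolv.conf parser rather than kubelet's actual implementation.

```go
// Sketch of the check behind the "Nameserver limits exceeded" warnings above:
// keep at most three nameserver entries from a resolv.conf-style file and
// report whether anything had to be dropped. Simplified stand-in only.
package dnscheck

import (
	"bufio"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet's documented nameserver limit

// nameservers returns the nameserver entries from the given file, plus
// whether the list was truncated to the limit (the condition kubelet logs).
func nameservers(path string) ([]string, bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, false, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, false, err
	}

	if len(servers) > maxNameservers {
		return servers[:maxNameservers], true, nil // truncated: warning would be logged
	}
	return servers, false, nil
}
```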
Nov 5 15:52:00.107448 containerd[1604]: time="2025-11-05T15:52:00.107105247Z" level=info msg="received exit event container_id:\"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\" id:\"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\" pid:4671 exited_at:{seconds:1762357920 nanos:106590456}" Nov 5 15:52:00.107448 containerd[1604]: time="2025-11-05T15:52:00.107386378Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\" id:\"ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24\" pid:4671 exited_at:{seconds:1762357920 nanos:106590456}" Nov 5 15:52:00.176202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed55b312fc85da4089ff273000a1310211d909a8d51110fe5e163a0f7d2b1e24-rootfs.mount: Deactivated successfully. Nov 5 15:52:00.885770 kubelet[2770]: E1105 15:52:00.885724 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:00.896203 containerd[1604]: time="2025-11-05T15:52:00.896135861Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 15:52:00.912716 containerd[1604]: time="2025-11-05T15:52:00.912013766Z" level=info msg="Container f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:00.917296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521780223.mount: Deactivated successfully. Nov 5 15:52:00.926464 containerd[1604]: time="2025-11-05T15:52:00.926332847Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\"" Nov 5 15:52:00.927802 containerd[1604]: time="2025-11-05T15:52:00.927517197Z" level=info msg="StartContainer for \"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\"" Nov 5 15:52:00.930107 containerd[1604]: time="2025-11-05T15:52:00.930021120Z" level=info msg="connecting to shim f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" protocol=ttrpc version=3 Nov 5 15:52:00.961109 systemd[1]: Started cri-containerd-f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0.scope - libcontainer container f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0. Nov 5 15:52:01.004564 systemd[1]: cri-containerd-f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0.scope: Deactivated successfully. 
Nov 5 15:52:01.007079 containerd[1604]: time="2025-11-05T15:52:01.006278649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\" id:\"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\" pid:4711 exited_at:{seconds:1762357921 nanos:5375339}" Nov 5 15:52:01.009934 containerd[1604]: time="2025-11-05T15:52:01.008632017Z" level=info msg="received exit event container_id:\"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\" id:\"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\" pid:4711 exited_at:{seconds:1762357921 nanos:5375339}" Nov 5 15:52:01.023926 containerd[1604]: time="2025-11-05T15:52:01.023875333Z" level=info msg="StartContainer for \"f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0\" returns successfully" Nov 5 15:52:01.043336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f19d0269d80bf0199a517e2547148db052e817fbdd45d23bac4d43226915a9c0-rootfs.mount: Deactivated successfully. Nov 5 15:52:01.896014 kubelet[2770]: E1105 15:52:01.895939 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:01.907364 containerd[1604]: time="2025-11-05T15:52:01.907301459Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 15:52:01.934692 containerd[1604]: time="2025-11-05T15:52:01.933399343Z" level=info msg="Container b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:01.950828 containerd[1604]: time="2025-11-05T15:52:01.950768536Z" level=info msg="CreateContainer within sandbox \"0704cc91cbb92243ea68b07ff6283872293c20e5db23243452944f44fe43eecc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\"" Nov 5 15:52:01.952006 containerd[1604]: time="2025-11-05T15:52:01.951855794Z" level=info msg="StartContainer for \"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\"" Nov 5 15:52:01.955269 containerd[1604]: time="2025-11-05T15:52:01.955053100Z" level=info msg="connecting to shim b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5" address="unix:///run/containerd/s/6fcd87586100450cb0ee920129893e14250cce2e4216bfdd7757b2adc5dc70a1" protocol=ttrpc version=3 Nov 5 15:52:01.995043 systemd[1]: Started cri-containerd-b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5.scope - libcontainer container b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5. 
Nov 5 15:52:02.084752 containerd[1604]: time="2025-11-05T15:52:02.083917986Z" level=info msg="StartContainer for \"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" returns successfully" Nov 5 15:52:02.216444 containerd[1604]: time="2025-11-05T15:52:02.216283110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"8b482e0e3f1fc37dd288a4d11535414069686890f6f081edc6153580237e8dec\" pid:4778 exited_at:{seconds:1762357922 nanos:215534196}" Nov 5 15:52:02.453150 kubelet[2770]: E1105 15:52:02.452249 2770 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-fkv7c" podUID="49e5993b-308c-4277-9128-0220771cc7d7" Nov 5 15:52:02.679759 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 5 15:52:02.904253 kubelet[2770]: E1105 15:52:02.904156 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:03.906191 kubelet[2770]: E1105 15:52:03.906124 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:04.453327 kubelet[2770]: E1105 15:52:04.453279 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:04.501528 containerd[1604]: time="2025-11-05T15:52:04.501482238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"baa8edfdea4e832c1565d3c58ed98c29e2e379731aa8ec7aa74a51e066e6e541\" pid:4954 exit_status:1 exited_at:{seconds:1762357924 nanos:500722890}" Nov 5 15:52:06.157626 systemd-networkd[1499]: lxc_health: Link UP Nov 5 15:52:06.169448 systemd-networkd[1499]: lxc_health: Gained carrier Nov 5 15:52:06.723404 containerd[1604]: time="2025-11-05T15:52:06.723182891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"07f1dde53bca70245098edf28c94b09909b4b011b7656f7ac1d17a01fe107ea4\" pid:5336 exited_at:{seconds:1762357926 nanos:722362082}" Nov 5 15:52:07.376842 systemd-networkd[1499]: lxc_health: Gained IPv6LL Nov 5 15:52:07.832764 kubelet[2770]: E1105 15:52:07.832615 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:07.860935 kubelet[2770]: I1105 15:52:07.860834 2770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xcmns" podStartSLOduration=10.860816273 podStartE2EDuration="10.860816273s" podCreationTimestamp="2025-11-05 15:51:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:02.934609002 +0000 UTC m=+109.687497492" watchObservedRunningTime="2025-11-05 15:52:07.860816273 +0000 UTC m=+114.613704767" Nov 5 15:52:07.917316 kubelet[2770]: E1105 15:52:07.917254 2770 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:08.919472 kubelet[2770]: E1105 15:52:08.919009 2770 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:52:09.102163 containerd[1604]: time="2025-11-05T15:52:09.102118494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"31d5ed6d40a5770c04f4e6d6dff97141f683167c2fbe371e2fc4d49655048aaf\" pid:5376 exited_at:{seconds:1762357929 nanos:100789057}" Nov 5 15:52:09.107215 kubelet[2770]: E1105 15:52:09.107100 2770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59192->127.0.0.1:39545: write tcp 127.0.0.1:59192->127.0.0.1:39545: write: broken pipe Nov 5 15:52:11.262908 containerd[1604]: time="2025-11-05T15:52:11.262857155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"d2906c3282c6bfaa8c090a0a84763f7e985218033517ea1948c6b4d38ecf3385\" pid:5408 exited_at:{seconds:1762357931 nanos:261395750}" Nov 5 15:52:13.414476 containerd[1604]: time="2025-11-05T15:52:13.414366136Z" level=info msg="StopPodSandbox for \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\"" Nov 5 15:52:13.415201 containerd[1604]: time="2025-11-05T15:52:13.414603286Z" level=info msg="TearDown network for sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" successfully" Nov 5 15:52:13.415201 containerd[1604]: time="2025-11-05T15:52:13.414624425Z" level=info msg="StopPodSandbox for \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" returns successfully" Nov 5 15:52:13.415390 containerd[1604]: time="2025-11-05T15:52:13.415352285Z" level=info msg="RemovePodSandbox for \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\"" Nov 5 15:52:13.415453 containerd[1604]: time="2025-11-05T15:52:13.415441412Z" level=info msg="Forcibly stopping sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\"" Nov 5 15:52:13.415622 containerd[1604]: time="2025-11-05T15:52:13.415596012Z" level=info msg="TearDown network for sandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" successfully" Nov 5 15:52:13.417863 containerd[1604]: time="2025-11-05T15:52:13.417820102Z" level=info msg="Ensure that sandbox df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6 in task-service has been cleanup successfully" Nov 5 15:52:13.420411 containerd[1604]: time="2025-11-05T15:52:13.420344020Z" level=info msg="RemovePodSandbox \"df5ce1b0726f884dd70a198a8f2e03a95da950dc5b77c754a9d7bb297b36d7b6\" returns successfully" Nov 5 15:52:13.421472 containerd[1604]: time="2025-11-05T15:52:13.421437822Z" level=info msg="StopPodSandbox for \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\"" Nov 5 15:52:13.421616 containerd[1604]: time="2025-11-05T15:52:13.421592559Z" level=info msg="TearDown network for sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" successfully" Nov 5 15:52:13.421722 containerd[1604]: time="2025-11-05T15:52:13.421618069Z" level=info msg="StopPodSandbox for \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" returns successfully" Nov 
5 15:52:13.422466 containerd[1604]: time="2025-11-05T15:52:13.422432893Z" level=info msg="RemovePodSandbox for \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\"" Nov 5 15:52:13.422558 containerd[1604]: time="2025-11-05T15:52:13.422471418Z" level=info msg="Forcibly stopping sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\"" Nov 5 15:52:13.422614 containerd[1604]: time="2025-11-05T15:52:13.422592329Z" level=info msg="TearDown network for sandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" successfully" Nov 5 15:52:13.424666 containerd[1604]: time="2025-11-05T15:52:13.424445984Z" level=info msg="Ensure that sandbox dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1 in task-service has been cleanup successfully" Nov 5 15:52:13.427457 containerd[1604]: time="2025-11-05T15:52:13.427397739Z" level=info msg="RemovePodSandbox \"dc98faa278a9dc271992e8d32360a1323abd25bac72b69a31dfdbf3b756787f1\" returns successfully" Nov 5 15:52:13.439549 containerd[1604]: time="2025-11-05T15:52:13.439164044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27be446120fbcd47c0dcc3f4eb68fabdfabc4b1605def4b5922e36eacdd4bb5\" id:\"5268a4a57902e3ca078a35c815b5bc8b4e16981ddaf0558965199455570de9f1\" pid:5434 exited_at:{seconds:1762357933 nanos:437369843}" Nov 5 15:52:13.443180 kubelet[2770]: E1105 15:52:13.443053 2770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58106->127.0.0.1:39545: write tcp 127.0.0.1:58106->127.0.0.1:39545: write: broken pipe Nov 5 15:52:13.459730 sshd[4516]: Connection closed by 139.178.68.195 port 39040 Nov 5 15:52:13.460336 sshd-session[4509]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:13.468838 systemd[1]: sshd@25-24.144.92.23:22-139.178.68.195:39040.service: Deactivated successfully. Nov 5 15:52:13.472878 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:52:13.476775 systemd-logind[1571]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:52:13.478310 systemd-logind[1571]: Removed session 26.
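The StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox entries at 15:52:13 appear to be kubelet's routine cleanup of the sandboxes left over from the pods removed earlier (df5ce1b0... and dc98faa2...). The pair of CRI calls behind them, sketched with the same assumed client as above and a placeholder sandbox ID:

```go
// Sketch of the sandbox cleanup pair logged above: StopPodSandbox followed by
// RemovePodSandbox. The sandbox ID is a placeholder for the IDs in the log.
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, sandboxID string) error {
	// Stop tears down the sandbox network and task ("TearDown network ... successfully").
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		return err
	}
	// Remove deletes the sandbox record ("RemovePodSandbox ... returns successfully").
	_, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID})
	return err
}
```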