Nov 6 00:20:50.999363 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025
Nov 6 00:20:50.999406 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:20:50.999427 kernel: BIOS-provided physical RAM map:
Nov 6 00:20:50.999441 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 6 00:20:50.999453 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 6 00:20:50.999465 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 6 00:20:50.999480 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 6 00:20:50.999500 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 6 00:20:50.999512 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:20:50.999524 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 6 00:20:50.999541 kernel: NX (Execute Disable) protection: active
Nov 6 00:20:50.999583 kernel: APIC: Static calls initialized
Nov 6 00:20:50.999596 kernel: SMBIOS 2.8 present.
Nov 6 00:20:50.999609 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 6 00:20:50.999625 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:20:50.999638 kernel: Hypervisor detected: KVM
Nov 6 00:20:50.999660 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 6 00:20:50.999673 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:20:50.999687 kernel: kvm-clock: using sched offset of 5671471478 cycles
Nov 6 00:20:50.999699 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:20:50.999710 kernel: tsc: Detected 1995.312 MHz processor
Nov 6 00:20:50.999722 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:20:50.999737 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:20:50.999750 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 6 00:20:50.999764 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 6 00:20:50.999781 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:20:50.999795 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:20:50.999809 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 6 00:20:50.999822 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999835 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999849 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999862 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 6 00:20:50.999875 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999889 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999906 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999920 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:20:50.999934 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 6 00:20:50.999948 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 6 00:20:50.999961 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 6 00:20:50.999975 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 6 00:20:50.999996 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 6 00:20:51.000014 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 6 00:20:51.000029 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 6 00:20:51.000044 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 6 00:20:51.000058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 6 00:20:51.000073 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 6 00:20:51.000088 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 6 00:20:51.000102 kernel: Zone ranges:
Nov 6 00:20:51.000120 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:20:51.000135 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdafff]
Nov 6 00:20:51.000149 kernel:   Normal   empty
Nov 6 00:20:51.000164 kernel:   Device   empty
Nov 6 00:20:51.000179 kernel: Movable zone start for each node
Nov 6 00:20:51.000193 kernel: Early memory node ranges
Nov 6 00:20:51.000208 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 6 00:20:51.000223 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 6 00:20:51.000238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 6 00:20:51.000253 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:20:51.000271 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 6 00:20:51.000286 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 6 00:20:51.000301 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:20:51.000321 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:20:51.000336 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:20:51.000356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:20:51.000371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:20:51.000386 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:20:51.000404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:20:51.000421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:20:51.000433 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:20:51.000446 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:20:51.000459 kernel: TSC deadline timer available
Nov 6 00:20:51.000472 kernel: CPU topo: Max. logical packages:   1
Nov 6 00:20:51.000485 kernel: CPU topo: Max. logical dies:       1
Nov 6 00:20:51.000499 kernel: CPU topo: Max. dies per package:   1
Nov 6 00:20:51.000511 kernel: CPU topo: Max. threads per core:   1
Nov 6 00:20:51.000524 kernel: CPU topo: Num. cores per package:  2
Nov 6 00:20:51.000541 kernel: CPU topo: Num. threads per package: 2
Nov 6 00:20:51.000574 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 6 00:20:51.000606 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:20:51.000621 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 6 00:20:51.000635 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:20:51.000648 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:20:51.000662 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 6 00:20:51.000676 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 6 00:20:51.000689 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 6 00:20:51.000705 kernel: pcpu-alloc: [0] 0 1
Nov 6 00:20:51.000719 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 6 00:20:51.000736 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:20:51.000750 kernel: random: crng init done
Nov 6 00:20:51.000763 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:20:51.000778 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 6 00:20:51.000792 kernel: Fallback order for Node 0: 0
Nov 6 00:20:51.000805 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 524153
Nov 6 00:20:51.000823 kernel: Policy zone: DMA32
Nov 6 00:20:51.000838 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:20:51.000850 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 6 00:20:51.000865 kernel: Kernel/User page tables isolation: enabled
Nov 6 00:20:51.000878 kernel: ftrace: allocating 40021 entries in 157 pages
Nov 6 00:20:51.000892 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:20:51.000905 kernel: Dynamic Preempt: voluntary
Nov 6 00:20:51.000920 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:20:51.000936 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:20:51.000954 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 6 00:20:51.000969 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:20:51.000983 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:20:51.000998 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:20:51.001013 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:20:51.001027 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 6 00:20:51.001040 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:20:51.001061 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:20:51.001075 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 6 00:20:51.001094 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 6 00:20:51.001108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:20:51.001122 kernel: Console: colour VGA+ 80x25
Nov 6 00:20:51.001136 kernel: printk: legacy console [tty0] enabled
Nov 6 00:20:51.001150 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:20:51.001177 kernel: ACPI: Core revision 20240827
Nov 6 00:20:51.001192 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:20:51.001221 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:20:51.001236 kernel: x2apic enabled
Nov 6 00:20:51.001251 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:20:51.001265 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:20:51.001280 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 6 00:20:51.001305 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Nov 6 00:20:51.001318 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 6 00:20:51.001334 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 6 00:20:51.001348 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:20:51.001364 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:20:51.001383 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:20:51.001398 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 6 00:20:51.001413 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:20:51.001428 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:20:51.001443 kernel: MDS: Mitigation: Clear CPU buffers
Nov 6 00:20:51.001458 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 6 00:20:51.001473 kernel: active return thunk: its_return_thunk
Nov 6 00:20:51.001489 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 6 00:20:51.001504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:20:51.001523 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:20:51.001537 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:20:51.002608 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:20:51.002666 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 6 00:20:51.002683 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:20:51.002696 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:20:51.002710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:20:51.002726 kernel: landlock: Up and running.
Nov 6 00:20:51.002742 kernel: SELinux: Initializing.
Nov 6 00:20:51.002766 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 00:20:51.002782 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 6 00:20:51.002798 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 6 00:20:51.002814 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 6 00:20:51.002830 kernel: signal: max sigframe size: 1776
Nov 6 00:20:51.002847 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:20:51.002864 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:20:51.002880 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:20:51.002900 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 6 00:20:51.002915 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:20:51.003002 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:20:51.003019 kernel: .... node #0, CPUs: #1
Nov 6 00:20:51.003035 kernel: smp: Brought up 1 node, 2 CPUs
Nov 6 00:20:51.003051 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Nov 6 00:20:51.003069 kernel: Memory: 1960760K/2096612K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 131288K reserved, 0K cma-reserved)
Nov 6 00:20:51.003085 kernel: devtmpfs: initialized
Nov 6 00:20:51.003101 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:20:51.003117 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:20:51.003136 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 6 00:20:51.003152 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:20:51.003168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:20:51.003184 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:20:51.003200 kernel: audit: type=2000 audit(1762388446.588:1): state=initialized audit_enabled=0 res=1
Nov 6 00:20:51.003216 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:20:51.003232 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:20:51.003248 kernel: cpuidle: using governor menu
Nov 6 00:20:51.003267 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:20:51.003283 kernel: dca service started, version 1.12.1
Nov 6 00:20:51.003299 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:20:51.003315 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:20:51.003330 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:20:51.003346 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:20:51.003361 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:20:51.003377 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:20:51.003393 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:20:51.003412 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:20:51.003428 kernel: ACPI: Interpreter enabled
Nov 6 00:20:51.003444 kernel: ACPI: PM: (supports S0 S5)
Nov 6 00:20:51.003459 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:20:51.003475 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:20:51.003492 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:20:51.003506 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 6 00:20:51.003520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:20:51.003882 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:20:51.004057 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 6 00:20:51.004209 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 6 00:20:51.004230 kernel: acpiphp: Slot [3] registered
Nov 6 00:20:51.004245 kernel: acpiphp: Slot [4] registered
Nov 6 00:20:51.004260 kernel: acpiphp: Slot [5] registered
Nov 6 00:20:51.004274 kernel: acpiphp: Slot [6] registered
Nov 6 00:20:51.004290 kernel: acpiphp: Slot [7] registered
Nov 6 00:20:51.004305 kernel: acpiphp: Slot [8] registered
Nov 6 00:20:51.004325 kernel: acpiphp: Slot [9] registered
Nov 6 00:20:51.004339 kernel: acpiphp: Slot [10] registered
Nov 6 00:20:51.004354 kernel: acpiphp: Slot [11] registered
Nov 6 00:20:51.004369 kernel: acpiphp: Slot [12] registered
Nov 6 00:20:51.004384 kernel: acpiphp: Slot [13] registered
Nov 6 00:20:51.004465 kernel: acpiphp: Slot [14] registered
Nov 6 00:20:51.004481 kernel: acpiphp: Slot [15] registered
Nov 6 00:20:51.004506 kernel: acpiphp: Slot [16] registered
Nov 6 00:20:51.004519 kernel: acpiphp: Slot [17] registered
Nov 6 00:20:51.004539 kernel: acpiphp: Slot [18] registered
Nov 6 00:20:51.005600 kernel: acpiphp: Slot [19] registered
Nov 6 00:20:51.005633 kernel: acpiphp: Slot [20] registered
Nov 6 00:20:51.005650 kernel: acpiphp: Slot [21] registered
Nov 6 00:20:51.005666 kernel: acpiphp: Slot [22] registered
Nov 6 00:20:51.005683 kernel: acpiphp: Slot [23] registered
Nov 6 00:20:51.005696 kernel: acpiphp: Slot [24] registered
Nov 6 00:20:51.005708 kernel: acpiphp: Slot [25] registered
Nov 6 00:20:51.005723 kernel: acpiphp: Slot [26] registered
Nov 6 00:20:51.005739 kernel: acpiphp: Slot [27] registered
Nov 6 00:20:51.005762 kernel: acpiphp: Slot [28] registered
Nov 6 00:20:51.005777 kernel: acpiphp: Slot [29] registered
Nov 6 00:20:51.005793 kernel: acpiphp: Slot [30] registered
Nov 6 00:20:51.005809 kernel: acpiphp: Slot [31] registered
Nov 6 00:20:51.005825 kernel: PCI host bridge to bus 0000:00
Nov 6 00:20:51.006085 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:20:51.006237 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:20:51.006374 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:20:51.007677 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 6 00:20:51.007852 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 6 00:20:51.007991 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:20:51.008232 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:20:51.008440 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:20:51.008675 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 6 00:20:51.008837 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 6 00:20:51.008987 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 6 00:20:51.009136 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 6 00:20:51.009312 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 6 00:20:51.009468 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 6 00:20:51.010547 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 6 00:20:51.010752 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 6 00:20:51.010925 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 6 00:20:51.011075 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 6 00:20:51.011220 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 6 00:20:51.011406 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:20:51.013478 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 6 00:20:51.013759 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 6 00:20:51.013938 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 6 00:20:51.014091 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 6 00:20:51.014243 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:20:51.014434 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:20:51.014642 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 6 00:20:51.014792 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 6 00:20:51.014939 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 6 00:20:51.015118 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:20:51.015275 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 6 00:20:51.015428 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 6 00:20:51.015995 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 6 00:20:51.016195 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:20:51.016353 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 6 00:20:51.016507 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 6 00:20:51.017250 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 6 00:20:51.017446 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:20:51.017643 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 6 00:20:51.017797 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 6 00:20:51.017949 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 6 00:20:51.018137 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:20:51.018347 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 6 00:20:51.018509 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 6 00:20:51.018689 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 6 00:20:51.018889 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 6 00:20:51.019047 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 6 00:20:51.019208 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 6 00:20:51.019227 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:20:51.019252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:20:51.019268 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:20:51.019284 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:20:51.019298 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 6 00:20:51.019326 kernel: iommu: Default domain type: Translated
Nov 6 00:20:51.019343 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:20:51.019359 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:20:51.019375 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:20:51.019391 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 6 00:20:51.019411 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 6 00:20:51.019663 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 6 00:20:51.019863 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 6 00:20:51.020030 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:20:51.020051 kernel: vgaarb: loaded
Nov 6 00:20:51.020068 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:20:51.020084 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:20:51.020100 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:20:51.020122 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:20:51.020137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:20:51.020151 kernel: pnp: PnP ACPI init
Nov 6 00:20:51.020167 kernel: pnp: PnP ACPI: found 4 devices
Nov 6 00:20:51.020182 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:20:51.020198 kernel: NET: Registered PF_INET protocol family
Nov 6 00:20:51.020214 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:20:51.020227 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 6 00:20:51.020241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:20:51.020260 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 6 00:20:51.020275 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 6 00:20:51.020291 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 6 00:20:51.020306 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 6 00:20:51.020320 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 6 00:20:51.020335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:20:51.020351 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:20:51.020516 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:20:51.020718 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:20:51.020873 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:20:51.021009 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 6 00:20:51.021146 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 6 00:20:51.021324 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 6 00:20:51.021481 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 6 00:20:51.021504 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 6 00:20:51.021694 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 29753 usecs
Nov 6 00:20:51.021714 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:20:51.021736 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 6 00:20:51.021752 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 6 00:20:51.021768 kernel: Initialise system trusted keyrings
Nov 6 00:20:51.021783 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 6 00:20:51.021799 kernel: Key type asymmetric registered
Nov 6 00:20:51.021814 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:20:51.021829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:20:51.021845 kernel: io scheduler mq-deadline registered
Nov 6 00:20:51.021859 kernel: io scheduler kyber registered
Nov 6 00:20:51.021877 kernel: io scheduler bfq registered
Nov 6 00:20:51.021893 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:20:51.021908 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 6 00:20:51.021921 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 6 00:20:51.021935 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 6 00:20:51.021950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:20:51.021965 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:20:51.021981 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:20:51.021996 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:20:51.022014 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:20:51.022029 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:20:51.022231 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 6 00:20:51.022373 kernel: rtc_cmos 00:03: registered as rtc0
Nov 6 00:20:51.022506 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T00:20:50 UTC (1762388450)
Nov 6 00:20:51.022666 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 6 00:20:51.022686 kernel: intel_pstate: CPU model not supported
Nov 6 00:20:51.022705 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:20:51.022720 kernel: Segment Routing with IPv6
Nov 6 00:20:51.022734 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:20:51.022750 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:20:51.022766 kernel: Key type dns_resolver registered
Nov 6 00:20:51.022781 kernel: IPI shorthand broadcast: enabled
Nov 6 00:20:51.022796 kernel: sched_clock: Marking stable (3787004770, 295166484)->(4306936766, -224765512)
Nov 6 00:20:51.022812 kernel: registered taskstats version 1
Nov 6 00:20:51.022827 kernel: Loading compiled-in X.509 certificates
Nov 6 00:20:51.022842 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31'
Nov 6 00:20:51.022860 kernel: Demotion targets for Node 0: null
Nov 6 00:20:51.022875 kernel: Key type .fscrypt registered
Nov 6 00:20:51.022891 kernel: Key type fscrypt-provisioning registered
Nov 6 00:20:51.022928 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:20:51.022946 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:20:51.022961 kernel: ima: No architecture policies found
Nov 6 00:20:51.022977 kernel: clk: Disabling unused clocks
Nov 6 00:20:51.022992 kernel: Warning: unable to open an initial console.
Nov 6 00:20:51.023011 kernel: Freeing unused kernel image (initmem) memory: 45548K
Nov 6 00:20:51.024629 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:20:51.024667 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K
Nov 6 00:20:51.024685 kernel: Run /init as init process
Nov 6 00:20:51.024698 kernel:   with arguments:
Nov 6 00:20:51.024713 kernel:     /init
Nov 6 00:20:51.024729 kernel:   with environment:
Nov 6 00:20:51.024746 kernel:     HOME=/
Nov 6 00:20:51.024761 kernel:     TERM=linux
Nov 6 00:20:51.024780 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:20:51.024811 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:20:51.024829 systemd[1]: Detected virtualization kvm.
Nov 6 00:20:51.024845 systemd[1]: Detected architecture x86-64.
Nov 6 00:20:51.024861 systemd[1]: Running in initrd.
Nov 6 00:20:51.024877 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:20:51.024894 systemd[1]: Hostname set to .
Nov 6 00:20:51.024910 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 00:20:51.024930 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:20:51.024948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:20:51.024965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:20:51.024985 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:20:51.025006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:20:51.025023 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:20:51.025046 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:20:51.025066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 00:20:51.025083 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 00:20:51.025101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:20:51.025119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:20:51.025140 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:20:51.025175 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:20:51.025192 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:20:51.025209 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:20:51.025227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:20:51.025244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:20:51.025262 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:20:51.025279 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:20:51.025296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:20:51.025316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:20:51.025334 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:20:51.025351 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:20:51.025368 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:20:51.025384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:20:51.025401 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:20:51.025418 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:20:51.025434 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:20:51.025451 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:20:51.025471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:20:51.025488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:20:51.025506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:20:51.025524 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:20:51.025545 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:20:51.027728 systemd-journald[192]: Collecting audit messages is disabled.
Nov 6 00:20:51.027778 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:20:51.027797 systemd-journald[192]: Journal started
Nov 6 00:20:51.027842 systemd-journald[192]: Runtime Journal (/run/log/journal/26fc2701f5e54c569b8045bc030ced02) is 4.9M, max 39.2M, 34.3M free.
Nov 6 00:20:50.997085 systemd-modules-load[194]: Inserted module 'overlay'
Nov 6 00:20:51.131111 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:20:51.131157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:20:51.131181 kernel: Bridge firewalling registered
Nov 6 00:20:51.049960 systemd-modules-load[194]: Inserted module 'br_netfilter'
Nov 6 00:20:51.133113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:20:51.134536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:20:51.136371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:20:51.141789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:20:51.145809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:20:51.149134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:20:51.152705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:20:51.172909 systemd-tmpfiles[211]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:20:51.177849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:20:51.187664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:20:51.190636 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:20:51.195373 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:20:51.196346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:20:51.200754 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:20:51.234937 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e
Nov 6 00:20:51.255380 systemd-resolved[231]: Positive Trust Anchors:
Nov 6 00:20:51.255401 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:20:51.255436 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:20:51.258752 systemd-resolved[231]: Defaulting to hostname 'linux'.
Nov 6 00:20:51.261330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:20:51.266416 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:20:51.375601 kernel: SCSI subsystem initialized
Nov 6 00:20:51.387594 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:20:51.401639 kernel: iscsi: registered transport (tcp)
Nov 6 00:20:51.427908 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:20:51.428000 kernel: QLogic iSCSI HBA Driver
Nov 6 00:20:51.454355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:20:51.473936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:20:51.478754 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:20:51.546296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:20:51.549740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:20:51.612684 kernel: raid6: avx2x4 gen() 18292 MB/s
Nov 6 00:20:51.630633 kernel: raid6: avx2x2 gen() 17157 MB/s
Nov 6 00:20:51.648886 kernel: raid6: avx2x1 gen() 12298 MB/s
Nov 6 00:20:51.648984 kernel: raid6: using algorithm avx2x4 gen() 18292 MB/s
Nov 6 00:20:51.668977 kernel: raid6: .... xor() 7231 MB/s, rmw enabled
Nov 6 00:20:51.669078 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:20:51.703604 kernel: xor: automatically using best checksumming function avx
Nov 6 00:20:51.880614 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:20:51.889606 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:20:51.894188 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:20:51.927275 systemd-udevd[441]: Using default interface naming scheme 'v255'.
Nov 6 00:20:51.933207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:20:51.937350 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:20:51.972883 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation
Nov 6 00:20:52.008528 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:20:52.011037 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:20:52.083867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:20:52.088789 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:20:52.200304 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Nov 6 00:20:52.201430 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 6 00:20:52.213611 kernel: scsi host0: Virtio SCSI HBA
Nov 6 00:20:52.222583 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:20:52.247591 kernel: libata version 3.00 loaded.
Nov 6 00:20:52.250596 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:20:52.252879 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 6 00:20:52.255593 kernel: scsi host1: ata_piix
Nov 6 00:20:52.266389 kernel: scsi host2: ata_piix
Nov 6 00:20:52.266726 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 6 00:20:52.266751 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 6 00:20:52.274068 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 6 00:20:52.292583 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:20:52.292867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:20:52.293038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:20:52.296790 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:20:52.303515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:20:52.310445 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 6 00:20:52.348176 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:20:52.348249 kernel: GPT:9289727 != 125829119
Nov 6 00:20:52.348261 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:20:52.348279 kernel: GPT:9289727 != 125829119
Nov 6 00:20:52.348295 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:20:52.348354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:20:52.351625 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 6 00:20:52.357034 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 6 00:20:52.361608 kernel: ACPI: bus type USB registered
Nov 6 00:20:52.367631 kernel: usbcore: registered new interface driver usbfs
Nov 6 00:20:52.367715 kernel: usbcore: registered new interface driver hub
Nov 6 00:20:52.367728 kernel: usbcore: registered new device driver usb
Nov 6 00:20:52.483842 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:20:52.509293 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 00:20:52.525812 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 6 00:20:52.526105 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 6 00:20:52.527976 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 6 00:20:52.529542 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 00:20:52.536461 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 6 00:20:52.536704 kernel: hub 1-0:1.0: USB hub found
Nov 6 00:20:52.536857 kernel: hub 1-0:1.0: 2 ports detected
Nov 6 00:20:52.548004 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 00:20:52.549096 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 6 00:20:52.558601 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:20:52.583630 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:20:52.584627 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:20:52.586601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:20:52.588674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:20:52.591487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:20:52.595754 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:20:52.611337 disk-uuid[599]: Primary Header is updated.
Nov 6 00:20:52.611337 disk-uuid[599]: Secondary Entries is updated.
Nov 6 00:20:52.611337 disk-uuid[599]: Secondary Header is updated.
Nov 6 00:20:52.622953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:20:52.629573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:20:52.631055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:20:53.637073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:20:53.637855 disk-uuid[600]: The operation has completed successfully.
Nov 6 00:20:53.689788 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:20:53.689920 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:20:53.721998 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 6 00:20:53.756520 sh[618]: Success
Nov 6 00:20:53.787505 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:20:53.787621 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:20:53.791588 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:20:53.804594 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Nov 6 00:20:53.866505 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:20:53.869684 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 6 00:20:53.882840 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 6 00:20:53.895627 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (630)
Nov 6 00:20:53.900633 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175
Nov 6 00:20:53.900749 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:20:53.912094 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:20:53.912230 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:20:53.916525 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 6 00:20:53.918306 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:20:53.919461 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:20:53.921765 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:20:53.924821 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:20:53.957607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Nov 6 00:20:53.963401 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:20:53.963469 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:20:53.970646 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:20:53.970723 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:20:53.979596 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:20:53.980997 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:20:53.984742 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:20:54.108796 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:20:54.132859 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:20:54.248793 systemd-networkd[800]: lo: Link UP
Nov 6 00:20:54.248809 systemd-networkd[800]: lo: Gained carrier
Nov 6 00:20:54.257872 systemd-networkd[800]: Enumeration completed
Nov 6 00:20:54.258467 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 6 00:20:54.258471 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 6 00:20:54.259864 systemd-networkd[800]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:20:54.259868 systemd-networkd[800]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:20:54.260580 systemd-networkd[800]: eth0: Link UP
Nov 6 00:20:54.260753 systemd-networkd[800]: eth1: Link UP
Nov 6 00:20:54.260884 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:20:54.260951 systemd-networkd[800]: eth0: Gained carrier
Nov 6 00:20:54.260963 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Nov 6 00:20:54.264833 systemd[1]: Reached target network.target - Network.
Nov 6 00:20:54.268832 systemd-networkd[800]: eth1: Gained carrier
Nov 6 00:20:54.268850 systemd-networkd[800]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 00:20:54.283656 systemd-networkd[800]: eth0: DHCPv4 address 147.182.203.34/20, gateway 147.182.192.1 acquired from 169.254.169.253
Nov 6 00:20:54.284650 ignition[707]: Ignition 2.22.0
Nov 6 00:20:54.284664 ignition[707]: Stage: fetch-offline
Nov 6 00:20:54.284731 ignition[707]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:54.287519 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 00:20:54.284745 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:54.284944 ignition[707]: parsed url from cmdline: ""
Nov 6 00:20:54.284952 ignition[707]: no config URL provided
Nov 6 00:20:54.284965 ignition[707]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:20:54.284982 ignition[707]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:20:54.294910 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 6 00:20:54.284993 ignition[707]: failed to fetch config: resource requires networking
Nov 6 00:20:54.285312 ignition[707]: Ignition finished successfully
Nov 6 00:20:54.298687 systemd-networkd[800]: eth1: DHCPv4 address 10.124.0.35/20 acquired from 169.254.169.253
Nov 6 00:20:54.333528 ignition[810]: Ignition 2.22.0
Nov 6 00:20:54.333590 ignition[810]: Stage: fetch
Nov 6 00:20:54.333836 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:54.333851 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:54.333973 ignition[810]: parsed url from cmdline: ""
Nov 6 00:20:54.333980 ignition[810]: no config URL provided
Nov 6 00:20:54.333990 ignition[810]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 00:20:54.334002 ignition[810]: no config at "/usr/lib/ignition/user.ign"
Nov 6 00:20:54.334045 ignition[810]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 6 00:20:54.349949 ignition[810]: GET result: OK
Nov 6 00:20:54.353904 ignition[810]: parsing config with SHA512: f96d7732be8683f4377aff77ddec83492845129f054082c3ecec165bc1c416e57b7beae4dbfdfbb1f6d978d861f997aa321fc06fa472d3de73f7f7a0859811dd
Nov 6 00:20:54.360192 unknown[810]: fetched base config from "system"
Nov 6 00:20:54.360212 unknown[810]: fetched base config from "system"
Nov 6 00:20:54.361051 ignition[810]: fetch: fetch complete
Nov 6 00:20:54.360221 unknown[810]: fetched user config from "digitalocean"
Nov 6 00:20:54.361060 ignition[810]: fetch: fetch passed
Nov 6 00:20:54.364789 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 6 00:20:54.361161 ignition[810]: Ignition finished successfully
Nov 6 00:20:54.367782 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 00:20:54.437408 ignition[816]: Ignition 2.22.0
Nov 6 00:20:54.437433 ignition[816]: Stage: kargs
Nov 6 00:20:54.437738 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:54.437759 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:54.441429 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 00:20:54.439534 ignition[816]: kargs: kargs passed
Nov 6 00:20:54.439663 ignition[816]: Ignition finished successfully
Nov 6 00:20:54.446738 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 00:20:54.501703 ignition[823]: Ignition 2.22.0
Nov 6 00:20:54.501721 ignition[823]: Stage: disks
Nov 6 00:20:54.501932 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:54.504435 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 00:20:54.501944 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:54.505986 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 00:20:54.502813 ignition[823]: disks: disks passed
Nov 6 00:20:54.506980 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 00:20:54.502878 ignition[823]: Ignition finished successfully
Nov 6 00:20:54.508463 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 00:20:54.510144 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 00:20:54.511810 systemd[1]: Reached target basic.target - Basic System.
Nov 6 00:20:54.515726 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 00:20:54.549747 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 6 00:20:54.554977 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 00:20:54.557722 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 00:20:54.716589 kernel: EXT4-fs (vda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none.
Nov 6 00:20:54.718618 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 00:20:54.720431 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 00:20:54.725019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:20:54.728641 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 00:20:54.740112 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 6 00:20:54.748890 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 6 00:20:54.751805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 00:20:54.762193 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Nov 6 00:20:54.762241 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:20:54.751954 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 00:20:54.767847 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:20:54.771062 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 00:20:54.784764 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 00:20:54.794662 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:20:54.794809 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:20:54.808668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:20:54.877756 coreos-metadata[843]: Nov 06 00:20:54.877 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 6 00:20:54.880887 coreos-metadata[842]: Nov 06 00:20:54.880 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 6 00:20:54.891066 coreos-metadata[843]: Nov 06 00:20:54.890 INFO Fetch successful
Nov 6 00:20:54.892018 coreos-metadata[842]: Nov 06 00:20:54.891 INFO Fetch successful
Nov 6 00:20:54.903276 initrd-setup-root[870]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 00:20:54.906677 coreos-metadata[843]: Nov 06 00:20:54.903 INFO wrote hostname ci-4459.1.0-n-800cd2f73d to /sysroot/etc/hostname
Nov 6 00:20:54.905954 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 6 00:20:54.907784 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 6 00:20:54.910833 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 6 00:20:54.916121 initrd-setup-root[878]: cut: /sysroot/etc/group: No such file or directory
Nov 6 00:20:54.924239 initrd-setup-root[886]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 00:20:54.932119 initrd-setup-root[893]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 00:20:55.072832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 00:20:55.074942 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 00:20:55.080778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 00:20:55.097578 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 00:20:55.102654 kernel: BTRFS info (device vda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:20:55.116739 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 00:20:55.146182 ignition[963]: INFO : Ignition 2.22.0
Nov 6 00:20:55.146182 ignition[963]: INFO : Stage: mount
Nov 6 00:20:55.148765 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:55.148765 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:55.152000 ignition[963]: INFO : mount: mount passed
Nov 6 00:20:55.152000 ignition[963]: INFO : Ignition finished successfully
Nov 6 00:20:55.151588 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 00:20:55.155677 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 00:20:55.177636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 00:20:55.200605 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (972)
Nov 6 00:20:55.204046 kernel: BTRFS info (device vda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381
Nov 6 00:20:55.204128 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:20:55.211985 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:20:55.212078 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:20:55.215161 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 00:20:55.259596 ignition[989]: INFO : Ignition 2.22.0
Nov 6 00:20:55.259596 ignition[989]: INFO : Stage: files
Nov 6 00:20:55.259596 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 00:20:55.259596 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 6 00:20:55.264136 ignition[989]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 00:20:55.264136 ignition[989]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 00:20:55.264136 ignition[989]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 00:20:55.267291 ignition[989]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 00:20:55.267291 ignition[989]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 00:20:55.267291 ignition[989]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 00:20:55.266922 unknown[989]: wrote ssh authorized keys file for user: core
Nov 6 00:20:55.271814 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 6 00:20:55.271814 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 6 00:20:55.364002 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 00:20:55.378706 systemd-networkd[800]: eth0: Gained IPv6LL
Nov 6 00:20:55.529960 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 6 00:20:55.529960 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 00:20:55.532528 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 6 00:20:55.774040 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 6 00:20:55.890801 systemd-networkd[800]: eth1: Gained IPv6LL
Nov 6 00:20:55.918698 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 00:20:55.918698 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:20:55.921938 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:20:55.930850 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 6 00:20:56.347541 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 6 00:20:57.719829 ignition[989]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 6 00:20:57.719829 ignition[989]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 00:20:57.722760 ignition[989]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:20:57.724484 ignition[989]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 00:20:57.724484 ignition[989]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 00:20:57.724484 ignition[989]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 00:20:57.728148 ignition[989]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 00:20:57.728148 ignition[989]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:20:57.728148 ignition[989]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 00:20:57.728148 ignition[989]: INFO : files: files passed
Nov 6 00:20:57.728148 ignition[989]: INFO : Ignition finished successfully
Nov 6 00:20:57.727258 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 00:20:57.732746 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 00:20:57.737737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 00:20:57.750646 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 00:20:57.750788 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 00:20:57.760586 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:20:57.760586 initrd-setup-root-after-ignition[1018]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:20:57.764248 initrd-setup-root-after-ignition[1022]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 00:20:57.765284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 00:20:57.766934 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 00:20:57.769673 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 00:20:57.842532 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 00:20:57.842713 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 00:20:57.844826 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 00:20:57.846135 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 00:20:57.847923 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:20:57.850809 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:20:57.890266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:20:57.892924 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:20:57.925508 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:20:57.927445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:20:57.928391 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:20:57.929320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:20:57.929513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:20:57.931618 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:20:57.932703 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:20:57.934510 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:20:57.936182 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:20:57.937808 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:20:57.939310 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:20:57.940879 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:20:57.942505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:20:57.944064 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:20:57.946098 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:20:57.947603 systemd[1]: Stopped target swap.target - Swaps. 
Nov 6 00:20:57.949047 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:20:57.949324 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:20:57.951310 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:20:57.952498 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:20:57.954186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:20:57.954346 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:20:57.956000 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:20:57.956266 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:20:57.958502 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:20:57.958850 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:20:57.960860 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:20:57.961100 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:20:57.962762 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 00:20:57.963019 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:20:57.966843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:20:57.971991 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:20:57.975146 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:20:57.975538 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:20:57.991229 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:20:57.991421 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:20:58.001238 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 6 00:20:58.002718 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:20:58.025210 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:20:58.042960 ignition[1042]: INFO : Ignition 2.22.0 Nov 6 00:20:58.062385 ignition[1042]: INFO : Stage: umount Nov 6 00:20:58.062385 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:20:58.062385 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 6 00:20:58.078798 ignition[1042]: INFO : umount: umount passed Nov 6 00:20:58.078798 ignition[1042]: INFO : Ignition finished successfully Nov 6 00:20:58.066125 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:20:58.066298 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:20:58.080165 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:20:58.080315 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:20:58.081972 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:20:58.082079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:20:58.084611 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:20:58.084710 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:20:58.100213 systemd[1]: Stopped target network.target - Network. Nov 6 00:20:58.101767 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:20:58.101890 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:20:58.103225 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:20:58.104602 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:20:58.104683 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:20:58.106150 systemd[1]: Stopped target slices.target - Slice Units. 
Nov 6 00:20:58.107634 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:20:58.109352 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:20:58.109412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:20:58.110749 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:20:58.110807 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:20:58.146940 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:20:58.147045 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:20:58.150941 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:20:58.151041 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:20:58.153942 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:20:58.155328 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:20:58.158335 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:20:58.159400 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:20:58.160474 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:20:58.160617 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:20:58.164898 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:20:58.168060 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:20:58.168201 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:20:58.170738 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:20:58.171630 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:20:58.172775 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:20:58.172818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Nov 6 00:20:58.174421 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:20:58.174493 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:20:58.176810 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:20:58.179715 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:20:58.179825 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:20:58.180845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:20:58.180939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:20:58.184984 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:20:58.185076 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:20:58.186171 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:20:58.186247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:20:58.188339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:20:58.196198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:20:58.196333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:20:58.208061 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:20:58.209675 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:20:58.212023 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:20:58.212140 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:20:58.214142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:20:58.214228 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Nov 6 00:20:58.215747 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:20:58.215792 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:20:58.217905 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:20:58.218006 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:20:58.220092 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:20:58.220160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:20:58.221980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:20:58.222060 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:20:58.225749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:20:58.227365 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:20:58.227444 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:20:58.231203 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:20:58.231278 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:20:58.233204 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 00:20:58.233293 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:20:58.234482 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:20:58.234530 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:20:58.236058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:20:58.236119 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 6 00:20:58.242185 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 00:20:58.242264 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 6 00:20:58.242299 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 00:20:58.242392 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:20:58.251023 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:20:58.251239 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:20:58.257315 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:20:58.259524 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:20:58.279439 systemd[1]: Switching root. Nov 6 00:20:58.354934 systemd-journald[192]: Journal stopped Nov 6 00:20:59.764732 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Nov 6 00:20:59.764857 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:20:59.764884 kernel: SELinux: policy capability open_perms=1 Nov 6 00:20:59.764909 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:20:59.764926 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:20:59.764944 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:20:59.764963 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:20:59.764983 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:20:59.765002 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:20:59.765024 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:20:59.765053 kernel: audit: type=1403 audit(1762388458.517:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:20:59.765081 systemd[1]: Successfully loaded SELinux policy in 77.206ms. 
Nov 6 00:20:59.765134 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.538ms. Nov 6 00:20:59.765158 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:20:59.765181 systemd[1]: Detected virtualization kvm. Nov 6 00:20:59.765208 systemd[1]: Detected architecture x86-64. Nov 6 00:20:59.765231 systemd[1]: Detected first boot. Nov 6 00:20:59.765257 systemd[1]: Hostname set to . Nov 6 00:20:59.765277 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:20:59.765304 zram_generator::config[1089]: No configuration found. Nov 6 00:20:59.765329 kernel: Guest personality initialized and is inactive Nov 6 00:20:59.765350 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:20:59.765368 kernel: Initialized host personality Nov 6 00:20:59.765387 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:20:59.765407 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:20:59.765429 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:20:59.765456 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:20:59.765475 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:20:59.765494 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:20:59.765517 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:20:59.765538 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:20:59.765578 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:20:59.765599 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Nov 6 00:20:59.765620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:20:59.765647 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:20:59.765668 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:20:59.765686 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:20:59.765706 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:20:59.765727 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:20:59.765747 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:20:59.765768 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:20:59.765794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:20:59.765814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:20:59.765831 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:20:59.765849 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:20:59.765868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:20:59.765886 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:20:59.765903 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:20:59.765924 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:20:59.765950 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:20:59.765971 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 6 00:20:59.765990 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:20:59.766009 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:20:59.766026 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:20:59.766044 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:20:59.766061 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:20:59.766080 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:20:59.766099 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:20:59.766118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:20:59.766139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:20:59.766159 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:20:59.766179 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:20:59.766201 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:20:59.766221 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:20:59.766240 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:20:59.766259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:20:59.766277 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:20:59.766296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:20:59.766320 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:20:59.766339 systemd[1]: Reached target machines.target - Containers. 
Nov 6 00:20:59.766359 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:20:59.766381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:20:59.766402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:20:59.766421 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:20:59.766442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:20:59.766462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:20:59.766485 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:20:59.766506 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:20:59.766529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:20:59.768638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:20:59.768712 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:20:59.768733 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:20:59.768753 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:20:59.768774 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:20:59.768808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:20:59.768829 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:20:59.768851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 6 00:20:59.768873 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:20:59.768894 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:20:59.768914 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:20:59.768939 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:20:59.768960 kernel: loop: module loaded Nov 6 00:20:59.768982 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:20:59.769003 systemd[1]: Stopped verity-setup.service. Nov 6 00:20:59.769026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:20:59.769050 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:20:59.769069 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:20:59.769091 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:20:59.769111 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:20:59.769148 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:20:59.769168 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:20:59.769189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:20:59.769212 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:20:59.769238 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:20:59.769256 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:20:59.769274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:20:59.769296 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 6 00:20:59.769317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:20:59.769336 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:20:59.769355 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:20:59.769377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:20:59.769398 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:20:59.769421 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:20:59.769440 kernel: fuse: init (API version 7.41) Nov 6 00:20:59.769460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:20:59.769528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:20:59.769570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:20:59.769590 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:20:59.769614 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:20:59.769632 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:20:59.769653 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:20:59.769675 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:20:59.769766 systemd-journald[1155]: Collecting audit messages is disabled. Nov 6 00:20:59.769828 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:20:59.769849 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:20:59.769868 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 6 00:20:59.769887 systemd-journald[1155]: Journal started Nov 6 00:20:59.769924 systemd-journald[1155]: Runtime Journal (/run/log/journal/26fc2701f5e54c569b8045bc030ced02) is 4.9M, max 39.2M, 34.3M free. Nov 6 00:20:59.270950 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:20:59.295486 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:20:59.296005 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:20:59.797925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:20:59.803608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:20:59.809633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:20:59.816606 kernel: ACPI: bus type drm_connector registered Nov 6 00:20:59.823266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:20:59.828607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:20:59.836611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:20:59.845608 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:20:59.859613 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:20:59.867081 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:20:59.867646 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:20:59.871696 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:20:59.875235 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Nov 6 00:20:59.875914 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. 
Nov 6 00:20:59.876996 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:20:59.880654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:20:59.890355 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:20:59.895951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:20:59.933695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:20:59.963428 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:20:59.973999 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:20:59.981926 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:20:59.988952 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:21:00.026705 kernel: loop0: detected capacity change from 0 to 8 Nov 6 00:21:00.058538 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:21:00.071386 systemd-journald[1155]: Time spent on flushing to /var/log/journal/26fc2701f5e54c569b8045bc030ced02 is 108.388ms for 1028 entries. Nov 6 00:21:00.071386 systemd-journald[1155]: System Journal (/var/log/journal/26fc2701f5e54c569b8045bc030ced02) is 8M, max 195.6M, 187.6M free. Nov 6 00:21:00.223230 systemd-journald[1155]: Received client request to flush runtime journal. Nov 6 00:21:00.223331 kernel: loop1: detected capacity change from 0 to 224512 Nov 6 00:21:00.223368 kernel: loop2: detected capacity change from 0 to 128016 Nov 6 00:21:00.109744 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:21:00.154524 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:21:00.165213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 6 00:21:00.168887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:21:00.218129 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Nov 6 00:21:00.218149 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. Nov 6 00:21:00.227331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:21:00.236623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:21:00.255606 kernel: loop3: detected capacity change from 0 to 110984 Nov 6 00:21:00.303168 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:21:00.315619 kernel: loop4: detected capacity change from 0 to 8 Nov 6 00:21:00.323583 kernel: loop5: detected capacity change from 0 to 224512 Nov 6 00:21:00.345607 kernel: loop6: detected capacity change from 0 to 128016 Nov 6 00:21:00.364590 kernel: loop7: detected capacity change from 0 to 110984 Nov 6 00:21:00.379063 (sd-merge)[1240]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 6 00:21:00.381267 (sd-merge)[1240]: Merged extensions into '/usr'. Nov 6 00:21:00.394207 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:21:00.394228 systemd[1]: Reloading... Nov 6 00:21:00.607788 zram_generator::config[1263]: No configuration found. Nov 6 00:21:00.723134 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:21:00.939702 systemd[1]: Reloading finished in 544 ms. Nov 6 00:21:00.957368 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:21:00.960104 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:21:00.981844 systemd[1]: Starting ensure-sysext.service... 
Nov 6 00:21:00.984298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:21:00.997319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:21:01.004160 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:21:01.006199 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:21:01.006214 systemd[1]: Reloading... Nov 6 00:21:01.035961 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:21:01.036453 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:21:01.036861 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:21:01.037420 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:21:01.040486 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:21:01.041488 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Nov 6 00:21:01.041729 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Nov 6 00:21:01.046348 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:21:01.046363 systemd-tmpfiles[1310]: Skipping /boot Nov 6 00:21:01.064466 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:21:01.065748 systemd-tmpfiles[1310]: Skipping /boot Nov 6 00:21:01.071982 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Nov 6 00:21:01.120620 zram_generator::config[1335]: No configuration found. Nov 6 00:21:01.482386 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 6 00:21:01.482917 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 00:21:01.484845 systemd[1]: Reloading finished in 478 ms. Nov 6 00:21:01.500065 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:21:01.497829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:21:01.512483 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:21:01.532601 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 00:21:01.561717 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 6 00:21:01.565788 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:21:01.564987 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 6 00:21:01.566028 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:01.568894 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:21:01.576699 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:21:01.577923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:01.580763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:21:01.593066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:21:01.598665 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:21:01.600647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:21:01.601874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 6 00:21:01.607027 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:21:01.608987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:01.612685 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:21:01.628261 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:21:01.642466 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:21:01.661595 kernel: ISO 9660 Extensions: RRIP_1991A Nov 6 00:21:01.654364 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:21:01.655176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:01.663098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:01.663284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:01.663514 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:01.663638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:01.663721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 6 00:21:01.668759 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:01.669054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:21:01.671813 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:21:01.672873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:21:01.673034 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:21:01.673203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:21:01.687857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:21:01.694334 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 6 00:21:01.726691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:21:01.727131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:21:01.728382 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:21:01.737999 systemd[1]: Finished ensure-sysext.service. Nov 6 00:21:01.750654 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:21:01.752091 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:21:01.752640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 6 00:21:01.757298 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:21:01.759605 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:21:01.766925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:21:01.771027 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:21:01.776925 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:21:01.777197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:21:01.802180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:21:01.802516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:21:01.803893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:21:01.823807 augenrules[1475]: No rules Nov 6 00:21:01.826359 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:21:01.826777 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:21:01.874253 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:21:01.879976 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:21:01.882122 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:21:01.928195 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:21:01.978711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 6 00:21:02.010665 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 6 00:21:02.010751 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 6 00:21:02.021514 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:21:02.021693 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 00:21:02.021725 kernel: [drm] features: -context_init Nov 6 00:21:02.024603 kernel: [drm] number of scanouts: 1 Nov 6 00:21:02.024687 kernel: [drm] number of cap sets: 0 Nov 6 00:21:02.030593 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 6 00:21:02.052592 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 00:21:02.054163 kernel: Console: switching to colour frame buffer device 128x48 Nov 6 00:21:02.061592 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 00:21:02.085890 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:21:02.086616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:02.139140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:21:02.246597 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:21:02.278444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:21:02.332348 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:21:02.334494 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:21:02.342241 systemd-resolved[1437]: Positive Trust Anchors: Nov 6 00:21:02.342750 systemd-resolved[1437]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:21:02.342835 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:21:02.352266 systemd-resolved[1437]: Using system hostname 'ci-4459.1.0-n-800cd2f73d'. Nov 6 00:21:02.354519 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:21:02.357741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:21:02.357925 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:21:02.358165 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:21:02.358285 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:21:02.358365 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:21:02.359492 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:21:02.360026 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:21:02.360168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:21:02.360263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:21:02.360304 systemd[1]: Reached target paths.target - Path Units. 
Nov 6 00:21:02.360385 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:21:02.362533 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:21:02.364166 systemd-networkd[1436]: lo: Link UP Nov 6 00:21:02.364181 systemd-networkd[1436]: lo: Gained carrier Nov 6 00:21:02.366493 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:21:02.367591 systemd-timesyncd[1456]: No network connectivity, watching for changes. Nov 6 00:21:02.367835 systemd-networkd[1436]: Enumeration completed Nov 6 00:21:02.372120 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:21:02.373918 systemd-networkd[1436]: eth0: Configuring with /run/systemd/network/10-42:dc:2f:29:0f:dd.network. Nov 6 00:21:02.375114 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:21:02.378279 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:21:02.379683 systemd-networkd[1436]: eth1: Configuring with /run/systemd/network/10-12:9a:c5:69:df:30.network. Nov 6 00:21:02.382415 systemd-networkd[1436]: eth0: Link UP Nov 6 00:21:02.382783 systemd-networkd[1436]: eth0: Gained carrier Nov 6 00:21:02.388679 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:21:02.390098 systemd-networkd[1436]: eth1: Link UP Nov 6 00:21:02.390104 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:21:02.391766 systemd-networkd[1436]: eth1: Gained carrier Nov 6 00:21:02.391875 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:21:02.394736 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:21:02.398018 systemd[1]: Reached target network.target - Network. Nov 6 00:21:02.400455 systemd[1]: Reached target sockets.target - Socket Units. 
Nov 6 00:21:02.400805 systemd-timesyncd[1456]: Network configuration changed, trying to establish connection. Nov 6 00:21:02.401054 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:21:02.403647 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:21:02.403691 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:21:02.406019 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:21:02.412822 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:21:02.426759 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:21:02.431800 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:21:02.436137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:21:02.442754 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:21:02.444181 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:21:02.451862 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:21:02.453759 jq[1513]: false Nov 6 00:21:02.460853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:21:02.469071 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:21:02.483644 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:21:02.488835 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:21:02.503846 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:21:02.509919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Nov 6 00:21:02.511006 coreos-metadata[1510]: Nov 06 00:21:02.510 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:21:02.514276 systemd-timesyncd[1456]: Contacted time server 134.215.155.177:123 (3.flatcar.pool.ntp.org). Nov 6 00:21:02.514388 systemd-timesyncd[1456]: Initial clock synchronization to Thu 2025-11-06 00:21:02.751373 UTC. Nov 6 00:21:02.522818 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:21:02.526501 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:21:02.527221 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:21:02.529799 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing passwd entry cache Nov 6 00:21:02.529898 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:21:02.530624 oslogin_cache_refresh[1517]: Refreshing passwd entry cache Nov 6 00:21:02.537287 extend-filesystems[1516]: Found /dev/vda6 Nov 6 00:21:02.544851 coreos-metadata[1510]: Nov 06 00:21:02.533 INFO Fetch successful Nov 6 00:21:02.539132 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:21:02.560593 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting users, quitting Nov 6 00:21:02.560593 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:21:02.560593 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing group entry cache Nov 6 00:21:02.559876 oslogin_cache_refresh[1517]: Failure getting users, quitting Nov 6 00:21:02.559902 oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 6 00:21:02.559969 oslogin_cache_refresh[1517]: Refreshing group entry cache Nov 6 00:21:02.561872 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:21:02.565632 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting groups, quitting Nov 6 00:21:02.565632 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:21:02.565229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:21:02.564026 oslogin_cache_refresh[1517]: Failure getting groups, quitting Nov 6 00:21:02.565529 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:21:02.564047 oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:21:02.567255 extend-filesystems[1516]: Found /dev/vda9 Nov 6 00:21:02.570476 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:21:02.570839 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:21:02.576886 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:21:02.578773 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:21:02.582247 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:21:02.583608 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:21:02.588374 extend-filesystems[1516]: Checking size of /dev/vda9 Nov 6 00:21:02.663616 extend-filesystems[1516]: Resized partition /dev/vda9 Nov 6 00:21:02.691655 jq[1536]: true Nov 6 00:21:02.689310 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 6 00:21:02.695804 update_engine[1534]: I20251106 00:21:02.684970 1534 main.cc:92] Flatcar Update Engine starting Nov 6 00:21:02.695804 update_engine[1534]: I20251106 00:21:02.695410 1534 update_check_scheduler.cc:74] Next update check in 4m39s Nov 6 00:21:02.688527 dbus-daemon[1511]: [system] SELinux support is enabled Nov 6 00:21:02.696516 extend-filesystems[1556]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:21:02.705349 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:21:02.705439 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:21:02.708798 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:21:02.708993 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 6 00:21:02.709031 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:21:02.716238 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:21:02.743609 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 6 00:21:02.750254 jq[1560]: true Nov 6 00:21:02.750078 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:21:02.755282 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Nov 6 00:21:02.766104 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:21:02.786665 tar[1540]: linux-amd64/LICENSE Nov 6 00:21:02.786665 tar[1540]: linux-amd64/helm Nov 6 00:21:02.832670 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:21:02.834339 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:21:02.955922 systemd-logind[1528]: New seat seat0. Nov 6 00:21:02.959880 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:21:02.959909 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:21:02.960177 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:21:02.980157 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:21:03.027616 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 6 00:21:03.043776 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 00:21:03.043776 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 6 00:21:03.043776 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 6 00:21:03.073286 extend-filesystems[1516]: Resized filesystem in /dev/vda9 Nov 6 00:21:03.082689 bash[1590]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:21:03.046766 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:21:03.047205 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:21:03.053845 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:21:03.074597 systemd[1]: Starting sshkeys.service... 
Nov 6 00:21:03.153156 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 00:21:03.160346 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 00:21:03.265727 coreos-metadata[1599]: Nov 06 00:21:03.265 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 6 00:21:03.282661 coreos-metadata[1599]: Nov 06 00:21:03.281 INFO Fetch successful Nov 6 00:21:03.300720 unknown[1599]: wrote ssh authorized keys file for user: core Nov 6 00:21:03.358261 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:21:03.358084 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 00:21:03.366230 systemd[1]: Finished sshkeys.service. Nov 6 00:21:03.394799 containerd[1558]: time="2025-11-06T00:21:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:21:03.395140 containerd[1558]: time="2025-11-06T00:21:03.394855295Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414352584Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.663µs" Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414442409Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414490838Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414852362Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations 
type=io.containerd.warning.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414888890Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.414942059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.415041032Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415333 containerd[1558]: time="2025-11-06T00:21:03.415067217Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415734 containerd[1558]: time="2025-11-06T00:21:03.415482413Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415734 containerd[1558]: time="2025-11-06T00:21:03.415510999Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415734 containerd[1558]: time="2025-11-06T00:21:03.415538144Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415734 containerd[1558]: time="2025-11-06T00:21:03.415561256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:21:03.415887 containerd[1558]: time="2025-11-06T00:21:03.415739614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:21:03.416622 containerd[1558]: 
time="2025-11-06T00:21:03.416082090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:21:03.416622 containerd[1558]: time="2025-11-06T00:21:03.416146208Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:21:03.416622 containerd[1558]: time="2025-11-06T00:21:03.416164651Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:21:03.416622 containerd[1558]: time="2025-11-06T00:21:03.416231444Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:21:03.421115 containerd[1558]: time="2025-11-06T00:21:03.420229010Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:21:03.421115 containerd[1558]: time="2025-11-06T00:21:03.420461202Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.429766868Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.429884124Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.429910163Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430017849Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430041225Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 
00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430059690Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430074399Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430089013Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430101089Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430164491Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430183893Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430198083Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430398539Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:21:03.431565 containerd[1558]: time="2025-11-06T00:21:03.430418759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430437640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430459185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: 
time="2025-11-06T00:21:03.430485534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430499929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430514334Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430539313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430556224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430567931Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430612523Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430745106Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430771401Z" level=info msg="Start snapshots syncer" Nov 6 00:21:03.431970 containerd[1558]: time="2025-11-06T00:21:03.430814331Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:21:03.432356 containerd[1558]: time="2025-11-06T00:21:03.431161675Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:21:03.432356 containerd[1558]: time="2025-11-06T00:21:03.431232190Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431298581Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431465999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431488454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431500155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431512514Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431530158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431548175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431567824Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431721853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431742756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431804488Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431860451Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431880111Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:21:03.432558 containerd[1558]: time="2025-11-06T00:21:03.431890075Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.431901603Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.431950148Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.431968163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432000008Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432035202Z" level=info msg="runtime interface created" Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432045080Z" level=info msg="created NRI interface" Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432057759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432076218Z" level=info msg="Connect containerd service" Nov 6 00:21:03.433729 containerd[1558]: time="2025-11-06T00:21:03.432114343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:21:03.433729 containerd[1558]: 
time="2025-11-06T00:21:03.433492816Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:21:03.465559 sshd_keygen[1546]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:21:03.507886 systemd-networkd[1436]: eth1: Gained IPv6LL Nov 6 00:21:03.514398 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:21:03.518717 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:21:03.526386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:21:03.531430 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:21:03.600757 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:21:03.611220 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:21:03.655631 containerd[1558]: time="2025-11-06T00:21:03.655536489Z" level=info msg="Start subscribing containerd event" Nov 6 00:21:03.655784 containerd[1558]: time="2025-11-06T00:21:03.655643872Z" level=info msg="Start recovering state" Nov 6 00:21:03.655826 containerd[1558]: time="2025-11-06T00:21:03.655802861Z" level=info msg="Start event monitor" Nov 6 00:21:03.655826 containerd[1558]: time="2025-11-06T00:21:03.655817996Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:21:03.655826 containerd[1558]: time="2025-11-06T00:21:03.655825711Z" level=info msg="Start streaming server" Nov 6 00:21:03.655922 containerd[1558]: time="2025-11-06T00:21:03.655842197Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:21:03.655922 containerd[1558]: time="2025-11-06T00:21:03.655850325Z" level=info msg="runtime interface starting up..." 
Nov 6 00:21:03.655922 containerd[1558]: time="2025-11-06T00:21:03.655870996Z" level=info msg="starting plugins..." Nov 6 00:21:03.655922 containerd[1558]: time="2025-11-06T00:21:03.655884533Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:21:03.657303 containerd[1558]: time="2025-11-06T00:21:03.657257081Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:21:03.657687 containerd[1558]: time="2025-11-06T00:21:03.657618901Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:21:03.659718 containerd[1558]: time="2025-11-06T00:21:03.658753358Z" level=info msg="containerd successfully booted in 0.265717s" Nov 6 00:21:03.658994 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:21:03.675497 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:21:03.682076 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:21:03.682374 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:21:03.688308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:21:03.745933 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:21:03.753231 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:21:03.757165 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:21:03.760039 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:21:03.904994 tar[1540]: linux-amd64/README.md Nov 6 00:21:03.929047 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:21:04.083783 systemd-networkd[1436]: eth0: Gained IPv6LL Nov 6 00:21:04.494266 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:21:04.497964 systemd[1]: Started sshd@0-147.182.203.34:22-139.178.68.195:59162.service - OpenSSH per-connection server daemon (139.178.68.195:59162). 
Nov 6 00:21:04.612747 sshd[1653]: Accepted publickey for core from 139.178.68.195 port 59162 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:04.614813 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:04.625298 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:21:04.629385 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:21:04.650939 systemd-logind[1528]: New session 1 of user core. Nov 6 00:21:04.675658 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:21:04.685805 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:21:04.703540 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:21:04.711892 systemd-logind[1528]: New session c1 of user core. Nov 6 00:21:04.880241 systemd[1658]: Queued start job for default target default.target. Nov 6 00:21:04.887050 systemd[1658]: Created slice app.slice - User Application Slice. Nov 6 00:21:04.887301 systemd[1658]: Reached target paths.target - Paths. Nov 6 00:21:04.887542 systemd[1658]: Reached target timers.target - Timers. Nov 6 00:21:04.892819 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:21:04.915904 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:21:04.916485 systemd[1658]: Reached target sockets.target - Sockets. Nov 6 00:21:04.916811 systemd[1658]: Reached target basic.target - Basic System. Nov 6 00:21:04.916990 systemd[1658]: Reached target default.target - Main User Target. Nov 6 00:21:04.917199 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:21:04.917479 systemd[1658]: Startup finished in 193ms. Nov 6 00:21:04.929068 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 6 00:21:04.981744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:21:04.985416 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:21:04.987328 systemd[1]: Startup finished in 3.871s (kernel) + 7.841s (initrd) + 6.542s (userspace) = 18.255s. Nov 6 00:21:04.995986 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:21:05.014054 systemd[1]: Started sshd@1-147.182.203.34:22-139.178.68.195:59168.service - OpenSSH per-connection server daemon (139.178.68.195:59168). Nov 6 00:21:05.133396 sshd[1676]: Accepted publickey for core from 139.178.68.195 port 59168 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:05.136344 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:05.148606 systemd-logind[1528]: New session 2 of user core. Nov 6 00:21:05.152909 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:21:05.228668 sshd[1683]: Connection closed by 139.178.68.195 port 59168 Nov 6 00:21:05.229226 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:05.244798 systemd[1]: sshd@1-147.182.203.34:22-139.178.68.195:59168.service: Deactivated successfully. Nov 6 00:21:05.247175 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:21:05.249191 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:21:05.254921 systemd[1]: Started sshd@2-147.182.203.34:22-139.178.68.195:59176.service - OpenSSH per-connection server daemon (139.178.68.195:59176). Nov 6 00:21:05.257695 systemd-logind[1528]: Removed session 2. 
Nov 6 00:21:05.340517 sshd[1692]: Accepted publickey for core from 139.178.68.195 port 59176 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:05.340933 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:05.348852 systemd-logind[1528]: New session 3 of user core. Nov 6 00:21:05.354946 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:21:05.418148 sshd[1695]: Connection closed by 139.178.68.195 port 59176 Nov 6 00:21:05.421542 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:05.432302 systemd[1]: sshd@2-147.182.203.34:22-139.178.68.195:59176.service: Deactivated successfully. Nov 6 00:21:05.436385 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:21:05.439802 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:21:05.443870 systemd[1]: Started sshd@3-147.182.203.34:22-139.178.68.195:59182.service - OpenSSH per-connection server daemon (139.178.68.195:59182). Nov 6 00:21:05.445705 systemd-logind[1528]: Removed session 3. Nov 6 00:21:05.533286 sshd[1701]: Accepted publickey for core from 139.178.68.195 port 59182 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:05.535357 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:05.547541 systemd-logind[1528]: New session 4 of user core. Nov 6 00:21:05.551112 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:21:05.635917 sshd[1704]: Connection closed by 139.178.68.195 port 59182 Nov 6 00:21:05.637904 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:05.651991 systemd[1]: sshd@3-147.182.203.34:22-139.178.68.195:59182.service: Deactivated successfully. Nov 6 00:21:05.655234 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:21:05.657703 systemd-logind[1528]: Session 4 logged out. 
Waiting for processes to exit. Nov 6 00:21:05.662841 systemd[1]: Started sshd@4-147.182.203.34:22-139.178.68.195:59186.service - OpenSSH per-connection server daemon (139.178.68.195:59186). Nov 6 00:21:05.665684 systemd-logind[1528]: Removed session 4. Nov 6 00:21:05.754969 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:05.757653 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:05.764896 systemd-logind[1528]: New session 5 of user core. Nov 6 00:21:05.771965 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:21:05.845119 kubelet[1672]: E1106 00:21:05.845028 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:21:05.848483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:21:05.848841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:21:05.850249 systemd[1]: kubelet.service: Consumed 1.576s CPU time, 263.8M memory peak. Nov 6 00:21:05.861310 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:21:05.861880 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:21:05.879584 sudo[1715]: pam_unix(sudo:session): session closed for user root Nov 6 00:21:05.883838 sshd[1714]: Connection closed by 139.178.68.195 port 59186 Nov 6 00:21:05.885035 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:05.896383 systemd[1]: sshd@4-147.182.203.34:22-139.178.68.195:59186.service: Deactivated successfully. 
Nov 6 00:21:05.899063 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:21:05.900284 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:21:05.905428 systemd[1]: Started sshd@5-147.182.203.34:22-139.178.68.195:59192.service - OpenSSH per-connection server daemon (139.178.68.195:59192). Nov 6 00:21:05.906963 systemd-logind[1528]: Removed session 5. Nov 6 00:21:05.991505 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 59192 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:05.993646 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:06.000941 systemd-logind[1528]: New session 6 of user core. Nov 6 00:21:06.013111 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:21:06.081442 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:21:06.082239 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:21:06.091930 sudo[1727]: pam_unix(sudo:session): session closed for user root Nov 6 00:21:06.100136 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:21:06.100874 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:21:06.116377 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:21:06.168375 augenrules[1749]: No rules Nov 6 00:21:06.170370 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:21:06.170896 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 6 00:21:06.173919 sudo[1726]: pam_unix(sudo:session): session closed for user root Nov 6 00:21:06.177172 sshd[1725]: Connection closed by 139.178.68.195 port 59192 Nov 6 00:21:06.178147 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:06.197904 systemd[1]: sshd@5-147.182.203.34:22-139.178.68.195:59192.service: Deactivated successfully. Nov 6 00:21:06.200258 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:21:06.201724 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:21:06.206207 systemd[1]: Started sshd@6-147.182.203.34:22-139.178.68.195:59196.service - OpenSSH per-connection server daemon (139.178.68.195:59196). Nov 6 00:21:06.207063 systemd-logind[1528]: Removed session 6. Nov 6 00:21:06.283304 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 59196 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:21:06.285256 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:21:06.291955 systemd-logind[1528]: New session 7 of user core. Nov 6 00:21:06.300991 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:21:06.366036 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:21:06.366909 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:21:07.003060 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 6 00:21:07.024197 (dockerd)[1780]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:21:07.504346 dockerd[1780]: time="2025-11-06T00:21:07.504206532Z" level=info msg="Starting up" Nov 6 00:21:07.507612 dockerd[1780]: time="2025-11-06T00:21:07.506687101Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:21:07.529830 dockerd[1780]: time="2025-11-06T00:21:07.529759457Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:21:07.781658 dockerd[1780]: time="2025-11-06T00:21:07.781187593Z" level=info msg="Loading containers: start." Nov 6 00:21:07.797612 kernel: Initializing XFRM netlink socket Nov 6 00:21:08.184744 systemd-networkd[1436]: docker0: Link UP Nov 6 00:21:08.189404 dockerd[1780]: time="2025-11-06T00:21:08.189261436Z" level=info msg="Loading containers: done." 
Nov 6 00:21:08.214322 dockerd[1780]: time="2025-11-06T00:21:08.213851591Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:21:08.214322 dockerd[1780]: time="2025-11-06T00:21:08.213974184Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:21:08.214322 dockerd[1780]: time="2025-11-06T00:21:08.214094588Z" level=info msg="Initializing buildkit" Nov 6 00:21:08.255445 dockerd[1780]: time="2025-11-06T00:21:08.255376461Z" level=info msg="Completed buildkit initialization" Nov 6 00:21:08.265200 dockerd[1780]: time="2025-11-06T00:21:08.265134922Z" level=info msg="Daemon has completed initialization" Nov 6 00:21:08.265387 dockerd[1780]: time="2025-11-06T00:21:08.265259545Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:21:08.265758 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:21:09.383589 containerd[1558]: time="2025-11-06T00:21:09.383511457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 00:21:09.917372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067115716.mount: Deactivated successfully. 
Nov 6 00:21:11.526171 containerd[1558]: time="2025-11-06T00:21:11.526087420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:11.527463 containerd[1558]: time="2025-11-06T00:21:11.527257305Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 6 00:21:11.528322 containerd[1558]: time="2025-11-06T00:21:11.528282748Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:11.532725 containerd[1558]: time="2025-11-06T00:21:11.532657976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:11.533693 containerd[1558]: time="2025-11-06T00:21:11.533643442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.150079756s" Nov 6 00:21:11.533932 containerd[1558]: time="2025-11-06T00:21:11.533702194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 6 00:21:11.534508 containerd[1558]: time="2025-11-06T00:21:11.534479092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 00:21:13.193943 containerd[1558]: time="2025-11-06T00:21:13.193846762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:13.195617 containerd[1558]: time="2025-11-06T00:21:13.195213041Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 6 00:21:13.196473 containerd[1558]: time="2025-11-06T00:21:13.196422429Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:13.201707 containerd[1558]: time="2025-11-06T00:21:13.200490506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:13.201707 containerd[1558]: time="2025-11-06T00:21:13.201523521Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.667009575s" Nov 6 00:21:13.201707 containerd[1558]: time="2025-11-06T00:21:13.201580083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 6 00:21:13.203297 containerd[1558]: time="2025-11-06T00:21:13.203248166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 00:21:14.505609 containerd[1558]: time="2025-11-06T00:21:14.505413672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:14.508384 containerd[1558]: time="2025-11-06T00:21:14.508323768Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 6 00:21:14.509170 containerd[1558]: time="2025-11-06T00:21:14.509116104Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:14.513246 containerd[1558]: time="2025-11-06T00:21:14.513150740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:14.516095 containerd[1558]: time="2025-11-06T00:21:14.515881869Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.312419288s" Nov 6 00:21:14.516095 containerd[1558]: time="2025-11-06T00:21:14.515953001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 6 00:21:14.516697 containerd[1558]: time="2025-11-06T00:21:14.516668062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 00:21:15.765526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262849823.mount: Deactivated successfully. Nov 6 00:21:15.855130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:21:15.859509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:21:16.068270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:21:16.085114 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 6 00:21:16.178358 kubelet[2079]: E1106 00:21:16.178251 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 6 00:21:16.185055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 6 00:21:16.185217 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 6 00:21:16.186129 systemd[1]: kubelet.service: Consumed 243ms CPU time, 111.3M memory peak.
Nov 6 00:21:16.659716 containerd[1558]: time="2025-11-06T00:21:16.659640341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:16.660891 containerd[1558]: time="2025-11-06T00:21:16.660829473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Nov 6 00:21:16.661978 containerd[1558]: time="2025-11-06T00:21:16.661886447Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:16.663958 containerd[1558]: time="2025-11-06T00:21:16.663922735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:16.664772 containerd[1558]: time="2025-11-06T00:21:16.664311103Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.147523328s"
Nov 6 00:21:16.664772 containerd[1558]: time="2025-11-06T00:21:16.664350300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 6 00:21:16.665081 containerd[1558]: time="2025-11-06T00:21:16.665047369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 6 00:21:16.666661 systemd-resolved[1437]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Nov 6 00:21:17.267039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935491118.mount: Deactivated successfully.
Nov 6 00:21:18.417678 containerd[1558]: time="2025-11-06T00:21:18.417617611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:18.420128 containerd[1558]: time="2025-11-06T00:21:18.420061061Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Nov 6 00:21:18.420665 containerd[1558]: time="2025-11-06T00:21:18.420626054Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:18.428283 containerd[1558]: time="2025-11-06T00:21:18.428210101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:18.430222 containerd[1558]: time="2025-11-06T00:21:18.430147403Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.765039116s"
Nov 6 00:21:18.430483 containerd[1558]: time="2025-11-06T00:21:18.430463740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 6 00:21:18.431141 containerd[1558]: time="2025-11-06T00:21:18.431107618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 6 00:21:18.978045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195729583.mount: Deactivated successfully.
Nov 6 00:21:18.986191 containerd[1558]: time="2025-11-06T00:21:18.986089382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:21:18.987458 containerd[1558]: time="2025-11-06T00:21:18.987407319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 6 00:21:18.988890 containerd[1558]: time="2025-11-06T00:21:18.988334421Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:21:18.997851 containerd[1558]: time="2025-11-06T00:21:18.997777468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 6 00:21:18.999245 containerd[1558]: time="2025-11-06T00:21:18.999182941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 567.734216ms"
Nov 6 00:21:18.999245 containerd[1558]: time="2025-11-06T00:21:18.999229482Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 6 00:21:19.000297 containerd[1558]: time="2025-11-06T00:21:19.000258895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 6 00:21:19.688116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968075278.mount: Deactivated successfully.
Nov 6 00:21:19.762901 systemd-resolved[1437]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Nov 6 00:21:21.776881 containerd[1558]: time="2025-11-06T00:21:21.776784967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:21.778588 containerd[1558]: time="2025-11-06T00:21:21.778516789Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Nov 6 00:21:21.779050 containerd[1558]: time="2025-11-06T00:21:21.779004050Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:21.783463 containerd[1558]: time="2025-11-06T00:21:21.783389847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:21:21.786125 containerd[1558]: time="2025-11-06T00:21:21.786000289Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.785693948s"
Nov 6 00:21:21.786500 containerd[1558]: time="2025-11-06T00:21:21.786090535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 6 00:21:25.582284 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:21:25.582460 systemd[1]: kubelet.service: Consumed 243ms CPU time, 111.3M memory peak.
Nov 6 00:21:25.588666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:21:25.629033 systemd[1]: Reload requested from client PID 2224 ('systemctl') (unit session-7.scope)...
Nov 6 00:21:25.629064 systemd[1]: Reloading...
Nov 6 00:21:25.783640 zram_generator::config[2267]: No configuration found.
Nov 6 00:21:26.147215 systemd[1]: Reloading finished in 517 ms.
Nov 6 00:21:26.233895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:21:26.240955 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:21:26.244479 systemd[1]: kubelet.service: Deactivated successfully.
Nov 6 00:21:26.244920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:21:26.244994 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.2M memory peak.
Nov 6 00:21:26.247806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 6 00:21:26.437107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 6 00:21:26.451169 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 6 00:21:26.530780 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:21:26.530780 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 6 00:21:26.530780 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 6 00:21:26.531301 kubelet[2322]: I1106 00:21:26.530880 2322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 6 00:21:26.917328 kubelet[2322]: I1106 00:21:26.917267 2322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 6 00:21:26.918587 kubelet[2322]: I1106 00:21:26.917545 2322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 6 00:21:26.918587 kubelet[2322]: I1106 00:21:26.918038 2322 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 6 00:21:26.949527 kubelet[2322]: E1106 00:21:26.949134 2322 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.182.203.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:21:26.951164 kubelet[2322]: I1106 00:21:26.951115 2322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 6 00:21:26.964576 kubelet[2322]: I1106 00:21:26.964512 2322 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 6 00:21:26.969216 kubelet[2322]: I1106 00:21:26.969170 2322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 6 00:21:26.971500 kubelet[2322]: I1106 00:21:26.971375 2322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 6 00:21:26.971750 kubelet[2322]: I1106 00:21:26.971462 2322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-800cd2f73d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 6 00:21:26.973854 kubelet[2322]: I1106 00:21:26.973760 2322 topology_manager.go:138] "Creating topology manager with none policy"
Nov 6 00:21:26.973854 kubelet[2322]: I1106 00:21:26.973850 2322 container_manager_linux.go:304] "Creating device plugin manager"
Nov 6 00:21:26.975451 kubelet[2322]: I1106 00:21:26.975378 2322 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:21:26.980145 kubelet[2322]: I1106 00:21:26.979285 2322 kubelet.go:446] "Attempting to sync node with API server"
Nov 6 00:21:26.980145 kubelet[2322]: I1106 00:21:26.979498 2322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 6 00:21:26.980438 kubelet[2322]: I1106 00:21:26.980200 2322 kubelet.go:352] "Adding apiserver pod source"
Nov 6 00:21:26.980438 kubelet[2322]: I1106 00:21:26.980233 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 6 00:21:26.993321 kubelet[2322]: W1106 00:21:26.992347 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.182.203.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-800cd2f73d&limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused
Nov 6 00:21:26.993321 kubelet[2322]: E1106 00:21:26.992452 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.182.203.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.1.0-n-800cd2f73d&limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:21:26.993321 kubelet[2322]: W1106 00:21:26.993171 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.203.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused
Nov 6 00:21:26.993321 kubelet[2322]: E1106 00:21:26.993239 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.203.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:21:26.995612 kubelet[2322]: I1106 00:21:26.995224 2322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 6 00:21:26.999489 kubelet[2322]: I1106 00:21:26.999439 2322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 6 00:21:27.001228 kubelet[2322]: W1106 00:21:27.000447 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 6 00:21:27.002853 kubelet[2322]: I1106 00:21:27.002151 2322 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 6 00:21:27.002853 kubelet[2322]: I1106 00:21:27.002211 2322 server.go:1287] "Started kubelet"
Nov 6 00:21:27.006699 kubelet[2322]: I1106 00:21:27.005075 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 6 00:21:27.010362 kubelet[2322]: I1106 00:21:27.010191 2322 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 6 00:21:27.014653 kubelet[2322]: I1106 00:21:27.014548 2322 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 6 00:21:27.014961 kubelet[2322]: E1106 00:21:27.014934 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-800cd2f73d\" not found"
Nov 6 00:21:27.015940 kubelet[2322]: I1106 00:21:27.015894 2322 server.go:479] "Adding debug handlers to kubelet server"
Nov 6 00:21:27.017685 kubelet[2322]: I1106 00:21:27.017611 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 6 00:21:27.018125 kubelet[2322]: I1106 00:21:27.018104 2322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 6 00:21:27.019294 kubelet[2322]: I1106 00:21:27.018652 2322 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 6 00:21:27.019294 kubelet[2322]: I1106 00:21:27.018737 2322 reconciler.go:26] "Reconciler: start to sync state"
Nov 6 00:21:27.029000 kubelet[2322]: I1106 00:21:27.028145 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 6 00:21:27.033444 kubelet[2322]: I1106 00:21:27.031326 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 6 00:21:27.033444 kubelet[2322]: I1106 00:21:27.033335 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 6 00:21:27.034223 kubelet[2322]: E1106 00:21:27.033987 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-800cd2f73d?timeout=10s\": dial tcp 147.182.203.34:6443: connect: connection refused" interval="200ms"
Nov 6 00:21:27.034223 kubelet[2322]: W1106 00:21:27.034200 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.182.203.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused
Nov 6 00:21:27.034349 kubelet[2322]: E1106 00:21:27.034264 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.182.203.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:21:27.035248 kubelet[2322]: I1106 00:21:27.035215 2322 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 6 00:21:27.035343 kubelet[2322]: I1106 00:21:27.035272 2322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 6 00:21:27.035343 kubelet[2322]: I1106 00:21:27.035281 2322 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 6 00:21:27.035391 kubelet[2322]: E1106 00:21:27.035360 2322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 6 00:21:27.038543 kubelet[2322]: E1106 00:21:27.036992 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.203.34:6443/api/v1/namespaces/default/events\": dial tcp 147.182.203.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-800cd2f73d.187542fdcc00c995 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-800cd2f73d,UID:ci-4459.1.0-n-800cd2f73d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-800cd2f73d,},FirstTimestamp:2025-11-06 00:21:27.002179989 +0000 UTC m=+0.545074512,LastTimestamp:2025-11-06 00:21:27.002179989 +0000 UTC m=+0.545074512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-800cd2f73d,}"
Nov 6 00:21:27.039428 kubelet[2322]: I1106 00:21:27.039198 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 6 00:21:27.041196 kubelet[2322]: I1106 00:21:27.041170 2322 factory.go:221] Registration of the containerd container factory successfully
Nov 6 00:21:27.041196 kubelet[2322]: I1106 00:21:27.041191 2322 factory.go:221] Registration of the systemd container factory successfully
Nov 6 00:21:27.051877 kubelet[2322]: E1106 00:21:27.051841 2322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 6 00:21:27.055113 kubelet[2322]: W1106 00:21:27.055026 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.203.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused
Nov 6 00:21:27.055113 kubelet[2322]: E1106 00:21:27.055123 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.203.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError"
Nov 6 00:21:27.060310 kubelet[2322]: E1106 00:21:27.060134 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.182.203.34:6443/api/v1/namespaces/default/events\": dial tcp 147.182.203.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.1.0-n-800cd2f73d.187542fdcc00c995 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.1.0-n-800cd2f73d,UID:ci-4459.1.0-n-800cd2f73d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.1.0-n-800cd2f73d,},FirstTimestamp:2025-11-06 00:21:27.002179989 +0000 UTC m=+0.545074512,LastTimestamp:2025-11-06 00:21:27.002179989 +0000 UTC m=+0.545074512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.1.0-n-800cd2f73d,}"
Nov 6 00:21:27.066702 kubelet[2322]: I1106 00:21:27.066302 2322 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 6 00:21:27.066702 kubelet[2322]: I1106 00:21:27.066333 2322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 6 00:21:27.066702 kubelet[2322]: I1106 00:21:27.066364 2322 state_mem.go:36] "Initialized new in-memory state store"
Nov 6 00:21:27.069728 kubelet[2322]: I1106 00:21:27.069684 2322 policy_none.go:49] "None policy: Start"
Nov 6 00:21:27.069942 kubelet[2322]: I1106 00:21:27.069929 2322 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 6 00:21:27.070020 kubelet[2322]: I1106 00:21:27.070012 2322 state_mem.go:35] "Initializing new in-memory state store"
Nov 6 00:21:27.080027 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 6 00:21:27.095684 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 6 00:21:27.104131 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 6 00:21:27.114508 kubelet[2322]: I1106 00:21:27.114447 2322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 6 00:21:27.115400 kubelet[2322]: E1106 00:21:27.115008 2322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.1.0-n-800cd2f73d\" not found"
Nov 6 00:21:27.115684 kubelet[2322]: I1106 00:21:27.115516 2322 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 6 00:21:27.115841 kubelet[2322]: I1106 00:21:27.115547 2322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 6 00:21:27.116222 kubelet[2322]: I1106 00:21:27.116191 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 6 00:21:27.118728 kubelet[2322]: E1106 00:21:27.118462 2322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 6 00:21:27.118728 kubelet[2322]: E1106 00:21:27.118509 2322 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.1.0-n-800cd2f73d\" not found"
Nov 6 00:21:27.149813 systemd[1]: Created slice kubepods-burstable-pod22cd155fe1f673d0fc16330f44352da3.slice - libcontainer container kubepods-burstable-pod22cd155fe1f673d0fc16330f44352da3.slice.
Nov 6 00:21:27.177681 kubelet[2322]: E1106 00:21:27.176465 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.186869 systemd[1]: Created slice kubepods-burstable-podb23a8bda8aa165496e43dd9470fd1016.slice - libcontainer container kubepods-burstable-podb23a8bda8aa165496e43dd9470fd1016.slice.
Nov 6 00:21:27.191234 kubelet[2322]: E1106 00:21:27.191159 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.194318 systemd[1]: Created slice kubepods-burstable-poddd76df1f2f606a0846991ff43724f78f.slice - libcontainer container kubepods-burstable-poddd76df1f2f606a0846991ff43724f78f.slice.
Nov 6 00:21:27.197492 kubelet[2322]: E1106 00:21:27.197375 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.218411 kubelet[2322]: I1106 00:21:27.218348 2322 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.219722 kubelet[2322]: E1106 00:21:27.219679 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.34:6443/api/v1/nodes\": dial tcp 147.182.203.34:6443: connect: connection refused" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.220856 kubelet[2322]: I1106 00:21:27.220815 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.220856 kubelet[2322]: I1106 00:21:27.220860 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221308 kubelet[2322]: I1106 00:21:27.220885 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221308 kubelet[2322]: I1106 00:21:27.220909 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221308 kubelet[2322]: I1106 00:21:27.221015 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221308 kubelet[2322]: I1106 00:21:27.221090 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221308 kubelet[2322]: I1106 00:21:27.221120 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221516 kubelet[2322]: I1106 00:21:27.221150 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.221516 kubelet[2322]: I1106 00:21:27.221172 2322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23a8bda8aa165496e43dd9470fd1016-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-800cd2f73d\" (UID: \"b23a8bda8aa165496e43dd9470fd1016\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.237978 kubelet[2322]: E1106 00:21:27.237898 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-800cd2f73d?timeout=10s\": dial tcp 147.182.203.34:6443: connect: connection refused" interval="400ms"
Nov 6 00:21:27.422589 kubelet[2322]: I1106 00:21:27.422333 2322 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.423202 kubelet[2322]: E1106 00:21:27.423159 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.34:6443/api/v1/nodes\": dial tcp 147.182.203.34:6443: connect: connection refused" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.477905 kubelet[2322]: E1106 00:21:27.477699 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.479297 containerd[1558]: time="2025-11-06T00:21:27.479191568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-800cd2f73d,Uid:22cd155fe1f673d0fc16330f44352da3,Namespace:kube-system,Attempt:0,}"
Nov 6 00:21:27.494053 kubelet[2322]: E1106 00:21:27.493995 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.499414 kubelet[2322]: E1106 00:21:27.499369 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.502312 containerd[1558]: time="2025-11-06T00:21:27.502225201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-800cd2f73d,Uid:dd76df1f2f606a0846991ff43724f78f,Namespace:kube-system,Attempt:0,}"
Nov 6 00:21:27.502750 containerd[1558]: time="2025-11-06T00:21:27.502714822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-800cd2f73d,Uid:b23a8bda8aa165496e43dd9470fd1016,Namespace:kube-system,Attempt:0,}"
Nov 6 00:21:27.632464 containerd[1558]: time="2025-11-06T00:21:27.632372057Z" level=info msg="connecting to shim 8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8" address="unix:///run/containerd/s/b74767cce4ac17e1986429eb458da48d0d859a23302ba76315ff2824a17c71b0" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:21:27.639656 kubelet[2322]: E1106 00:21:27.638962 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.182.203.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.1.0-n-800cd2f73d?timeout=10s\": dial tcp 147.182.203.34:6443: connect: connection refused" interval="800ms"
Nov 6 00:21:27.648598 containerd[1558]: time="2025-11-06T00:21:27.648189388Z" level=info msg="connecting to shim d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a" address="unix:///run/containerd/s/12ab48a286067ac6705109ca4ad1597918cc2b72da37e902ca1245bb1b1a4517" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:21:27.650897 containerd[1558]: time="2025-11-06T00:21:27.650844599Z" level=info msg="connecting to shim fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd" address="unix:///run/containerd/s/af15f3f3f8f8a06baa3fdb887c26e41ced152522106ad71b2626a154d92be849" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:21:27.774287 systemd[1]: Started cri-containerd-8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8.scope - libcontainer container 8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8.
Nov 6 00:21:27.787569 systemd[1]: Started cri-containerd-d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a.scope - libcontainer container d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a.
Nov 6 00:21:27.790380 systemd[1]: Started cri-containerd-fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd.scope - libcontainer container fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd.
Nov 6 00:21:27.831994 kubelet[2322]: I1106 00:21:27.831917 2322 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.833522 kubelet[2322]: E1106 00:21:27.833348 2322 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://147.182.203.34:6443/api/v1/nodes\": dial tcp 147.182.203.34:6443: connect: connection refused" node="ci-4459.1.0-n-800cd2f73d"
Nov 6 00:21:27.901529 containerd[1558]: time="2025-11-06T00:21:27.901458723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.1.0-n-800cd2f73d,Uid:b23a8bda8aa165496e43dd9470fd1016,Namespace:kube-system,Attempt:0,} returns sandbox id \"8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8\""
Nov 6 00:21:27.904895 kubelet[2322]: E1106 00:21:27.904524 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.913585 containerd[1558]: time="2025-11-06T00:21:27.913161129Z" level=info msg="CreateContainer within sandbox \"8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 6 00:21:27.929681 containerd[1558]: time="2025-11-06T00:21:27.929526020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.1.0-n-800cd2f73d,Uid:22cd155fe1f673d0fc16330f44352da3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd\""
Nov 6 00:21:27.931334 kubelet[2322]: E1106 00:21:27.930917 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.937474 containerd[1558]: time="2025-11-06T00:21:27.937100255Z" level=info msg="CreateContainer within sandbox \"fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 6 00:21:27.943365 containerd[1558]: time="2025-11-06T00:21:27.943306799Z" level=info msg="Container 58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:21:27.948267 containerd[1558]: time="2025-11-06T00:21:27.947684644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.1.0-n-800cd2f73d,Uid:dd76df1f2f606a0846991ff43724f78f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a\""
Nov 6 00:21:27.953069 kubelet[2322]: E1106 00:21:27.953008 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 6 00:21:27.962008 containerd[1558]: time="2025-11-06T00:21:27.961868133Z" level=info msg="Container 743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf: CDI devices from CRI
Config.CDIDevices: []" Nov 6 00:21:27.962008 containerd[1558]: time="2025-11-06T00:21:27.961935904Z" level=info msg="CreateContainer within sandbox \"d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:21:27.970038 containerd[1558]: time="2025-11-06T00:21:27.969930286Z" level=info msg="CreateContainer within sandbox \"8417daa33d153efdaa28e2cda590a7ee2658e0ebb6f8f37131555150ada4c2c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed\"" Nov 6 00:21:27.971893 containerd[1558]: time="2025-11-06T00:21:27.971841146Z" level=info msg="StartContainer for \"58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed\"" Nov 6 00:21:27.973459 containerd[1558]: time="2025-11-06T00:21:27.973412316Z" level=info msg="connecting to shim 58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed" address="unix:///run/containerd/s/b74767cce4ac17e1986429eb458da48d0d859a23302ba76315ff2824a17c71b0" protocol=ttrpc version=3 Nov 6 00:21:27.978634 containerd[1558]: time="2025-11-06T00:21:27.978543922Z" level=info msg="CreateContainer within sandbox \"fabbe77a7eda536f11fa67dcb017dc48896fe528d3402993d912483c7c5df1bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf\"" Nov 6 00:21:27.979998 containerd[1558]: time="2025-11-06T00:21:27.979929222Z" level=info msg="StartContainer for \"743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf\"" Nov 6 00:21:27.982991 containerd[1558]: time="2025-11-06T00:21:27.982905670Z" level=info msg="connecting to shim 743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf" address="unix:///run/containerd/s/af15f3f3f8f8a06baa3fdb887c26e41ced152522106ad71b2626a154d92be849" protocol=ttrpc version=3 Nov 6 00:21:27.984148 containerd[1558]: 
time="2025-11-06T00:21:27.984001040Z" level=info msg="Container e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:27.998861 containerd[1558]: time="2025-11-06T00:21:27.998733021Z" level=info msg="CreateContainer within sandbox \"d7e9f093cebcf03697e12e33ba8f14cf9953de2cbbe368da580907504248872a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3\"" Nov 6 00:21:28.001521 containerd[1558]: time="2025-11-06T00:21:28.001469211Z" level=info msg="StartContainer for \"e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3\"" Nov 6 00:21:28.004300 containerd[1558]: time="2025-11-06T00:21:28.003993259Z" level=info msg="connecting to shim e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3" address="unix:///run/containerd/s/12ab48a286067ac6705109ca4ad1597918cc2b72da37e902ca1245bb1b1a4517" protocol=ttrpc version=3 Nov 6 00:21:28.012984 systemd[1]: Started cri-containerd-58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed.scope - libcontainer container 58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed. Nov 6 00:21:28.022999 systemd[1]: Started cri-containerd-743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf.scope - libcontainer container 743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf. Nov 6 00:21:28.051990 systemd[1]: Started cri-containerd-e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3.scope - libcontainer container e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3. 
Nov 6 00:21:28.150670 kubelet[2322]: W1106 00:21:28.150548 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://147.182.203.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused Nov 6 00:21:28.150955 kubelet[2322]: E1106 00:21:28.150874 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.182.203.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:21:28.171972 kubelet[2322]: W1106 00:21:28.171854 2322 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.182.203.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.182.203.34:6443: connect: connection refused Nov 6 00:21:28.172326 kubelet[2322]: E1106 00:21:28.172240 2322 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.182.203.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.182.203.34:6443: connect: connection refused" logger="UnhandledError" Nov 6 00:21:28.191088 containerd[1558]: time="2025-11-06T00:21:28.190714570Z" level=info msg="StartContainer for \"58e4e5f66229b275276258878ae1309c16ab28e282eba690a078d7c7beba0fed\" returns successfully" Nov 6 00:21:28.192086 containerd[1558]: time="2025-11-06T00:21:28.192038766Z" level=info msg="StartContainer for \"743025c07d4f12e8ce851438d1780eb52262bc40fbacb32ed5a1e1598a084aaf\" returns successfully" Nov 6 00:21:28.216498 containerd[1558]: time="2025-11-06T00:21:28.216412803Z" level=info msg="StartContainer for 
\"e3d6927faa482e94bf3acc33a3162accb696e3e7e5572d7c49da105ad659c7f3\" returns successfully" Nov 6 00:21:28.635968 kubelet[2322]: I1106 00:21:28.635925 2322 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:29.095409 kubelet[2322]: E1106 00:21:29.095221 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:29.095971 kubelet[2322]: E1106 00:21:29.095455 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:29.103035 kubelet[2322]: E1106 00:21:29.102990 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:29.103225 kubelet[2322]: E1106 00:21:29.103193 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:29.118191 kubelet[2322]: E1106 00:21:29.118144 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:29.118361 kubelet[2322]: E1106 00:21:29.118343 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:30.120096 kubelet[2322]: E1106 00:21:30.120035 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.120742 
kubelet[2322]: E1106 00:21:30.120428 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:30.124395 kubelet[2322]: E1106 00:21:30.124337 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.124621 kubelet[2322]: E1106 00:21:30.124578 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:30.124913 kubelet[2322]: E1106 00:21:30.124883 2322 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.125099 kubelet[2322]: E1106 00:21:30.125072 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:30.571950 kubelet[2322]: E1106 00:21:30.571793 2322 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.1.0-n-800cd2f73d\" not found" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.654582 kubelet[2322]: I1106 00:21:30.654479 2322 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.655383 kubelet[2322]: E1106 00:21:30.655282 2322 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.1.0-n-800cd2f73d\": node \"ci-4459.1.0-n-800cd2f73d\" not found" Nov 6 00:21:30.715840 kubelet[2322]: I1106 00:21:30.715426 2322 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.722939 kubelet[2322]: E1106 00:21:30.722873 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.722939 kubelet[2322]: I1106 00:21:30.722927 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.725129 kubelet[2322]: E1106 00:21:30.725074 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-800cd2f73d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.725129 kubelet[2322]: I1106 00:21:30.725126 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.728347 kubelet[2322]: E1106 00:21:30.728271 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:30.996190 kubelet[2322]: I1106 00:21:30.996096 2322 apiserver.go:52] "Watching apiserver" Nov 6 00:21:31.019164 kubelet[2322]: I1106 00:21:31.019070 2322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:21:31.120080 kubelet[2322]: I1106 00:21:31.120038 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:31.120280 kubelet[2322]: I1106 00:21:31.120100 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:31.124588 kubelet[2322]: E1106 
00:21:31.124162 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:31.124588 kubelet[2322]: E1106 00:21:31.124427 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:31.125036 kubelet[2322]: E1106 00:21:31.125017 2322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.1.0-n-800cd2f73d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:31.125533 kubelet[2322]: E1106 00:21:31.125515 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:32.306084 kubelet[2322]: I1106 00:21:32.304686 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:32.314016 kubelet[2322]: W1106 00:21:32.313400 2322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:32.315859 kubelet[2322]: E1106 00:21:32.315806 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:32.925970 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-7.scope)... Nov 6 00:21:32.926014 systemd[1]: Reloading... Nov 6 00:21:33.046633 zram_generator::config[2632]: No configuration found. 
Nov 6 00:21:33.121338 kubelet[2322]: I1106 00:21:33.121265 2322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:33.127583 kubelet[2322]: E1106 00:21:33.127508 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:33.131079 kubelet[2322]: W1106 00:21:33.130624 2322 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:33.131267 kubelet[2322]: E1106 00:21:33.131136 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:33.439080 systemd[1]: Reloading finished in 512 ms. Nov 6 00:21:33.479047 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:21:33.499212 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:21:33.499965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:21:33.500139 systemd[1]: kubelet.service: Consumed 1.098s CPU time, 125.2M memory peak. Nov 6 00:21:33.505979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:21:33.747194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:21:33.761089 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:21:33.854040 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 6 00:21:33.854040 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:21:33.856831 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:21:33.856831 kubelet[2686]: I1106 00:21:33.855423 2686 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:21:33.869091 kubelet[2686]: I1106 00:21:33.869030 2686 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 00:21:33.869363 kubelet[2686]: I1106 00:21:33.869343 2686 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:21:33.870721 kubelet[2686]: I1106 00:21:33.870684 2686 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 00:21:33.874100 kubelet[2686]: I1106 00:21:33.874056 2686 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 6 00:21:33.883304 kubelet[2686]: I1106 00:21:33.883062 2686 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:21:33.892439 kubelet[2686]: I1106 00:21:33.890737 2686 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:21:33.898238 kubelet[2686]: I1106 00:21:33.898189 2686 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:21:33.899029 kubelet[2686]: I1106 00:21:33.898958 2686 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:21:33.899370 kubelet[2686]: I1106 00:21:33.899156 2686 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.1.0-n-800cd2f73d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:21:33.899513 kubelet[2686]: I1106 00:21:33.899503 2686 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 6 00:21:33.899576 kubelet[2686]: I1106 00:21:33.899569 2686 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 00:21:33.899694 kubelet[2686]: I1106 00:21:33.899686 2686 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:21:33.899935 kubelet[2686]: I1106 00:21:33.899925 2686 kubelet.go:446] "Attempting to sync node with API server" Nov 6 00:21:33.900922 kubelet[2686]: I1106 00:21:33.900900 2686 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:21:33.901088 kubelet[2686]: I1106 00:21:33.901074 2686 kubelet.go:352] "Adding apiserver pod source" Nov 6 00:21:33.901157 kubelet[2686]: I1106 00:21:33.901149 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:21:33.908085 kubelet[2686]: I1106 00:21:33.908040 2686 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:21:33.911833 kubelet[2686]: I1106 00:21:33.911784 2686 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 00:21:33.915056 kubelet[2686]: I1106 00:21:33.914892 2686 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:21:33.916507 kubelet[2686]: I1106 00:21:33.916478 2686 server.go:1287] "Started kubelet" Nov 6 00:21:33.939191 sudo[2700]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 00:21:33.940121 sudo[2700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 00:21:33.944227 kubelet[2686]: I1106 00:21:33.943523 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:21:33.957797 kubelet[2686]: I1106 00:21:33.957717 2686 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:21:33.965780 kubelet[2686]: I1106 00:21:33.964748 2686 server.go:479] "Adding debug handlers to kubelet server" Nov 6 00:21:33.966412 kubelet[2686]: I1106 
00:21:33.959422 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:21:33.966934 kubelet[2686]: I1106 00:21:33.958122 2686 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:21:33.967249 kubelet[2686]: I1106 00:21:33.967232 2686 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:21:33.967381 kubelet[2686]: I1106 00:21:33.962834 2686 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:21:33.967456 kubelet[2686]: I1106 00:21:33.962813 2686 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:21:33.968258 kubelet[2686]: I1106 00:21:33.968233 2686 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:21:33.976039 kubelet[2686]: I1106 00:21:33.975855 2686 factory.go:221] Registration of the systemd container factory successfully Nov 6 00:21:33.976485 kubelet[2686]: I1106 00:21:33.976459 2686 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:21:33.980197 kubelet[2686]: E1106 00:21:33.980156 2686 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:21:33.981406 kubelet[2686]: I1106 00:21:33.981321 2686 factory.go:221] Registration of the containerd container factory successfully Nov 6 00:21:34.013735 kubelet[2686]: I1106 00:21:34.013359 2686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 00:21:34.023598 kubelet[2686]: I1106 00:21:34.023025 2686 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:21:34.023598 kubelet[2686]: I1106 00:21:34.023070 2686 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 00:21:34.023598 kubelet[2686]: I1106 00:21:34.023116 2686 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:21:34.023598 kubelet[2686]: I1106 00:21:34.023126 2686 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 00:21:34.023598 kubelet[2686]: E1106 00:21:34.023204 2686 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:21:34.116308 kubelet[2686]: I1106 00:21:34.116211 2686 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:21:34.117090 kubelet[2686]: I1106 00:21:34.116642 2686 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:21:34.117090 kubelet[2686]: I1106 00:21:34.116690 2686 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:21:34.117403 kubelet[2686]: I1106 00:21:34.117379 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:21:34.117499 kubelet[2686]: I1106 00:21:34.117462 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:21:34.117660 kubelet[2686]: I1106 00:21:34.117637 2686 policy_none.go:49] "None policy: Start" Nov 6 00:21:34.117853 kubelet[2686]: I1106 00:21:34.117834 2686 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:21:34.118040 kubelet[2686]: I1106 00:21:34.118024 2686 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:21:34.118481 kubelet[2686]: I1106 00:21:34.118393 2686 state_mem.go:75] "Updated machine memory state" Nov 6 00:21:34.123331 kubelet[2686]: E1106 00:21:34.123278 2686 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:21:34.128354 kubelet[2686]: I1106 00:21:34.128232 2686 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 00:21:34.129937 kubelet[2686]: I1106 00:21:34.128752 2686 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:21:34.129937 kubelet[2686]: I1106 00:21:34.128772 2686 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:21:34.129937 kubelet[2686]: I1106 00:21:34.129146 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:21:34.142052 kubelet[2686]: E1106 00:21:34.140527 2686 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:21:34.245171 kubelet[2686]: I1106 00:21:34.244642 2686 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.257971 kubelet[2686]: I1106 00:21:34.257799 2686 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.258379 kubelet[2686]: I1106 00:21:34.258175 2686 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.325729 kubelet[2686]: I1106 00:21:34.324748 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.332913 kubelet[2686]: I1106 00:21:34.332752 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.333173 kubelet[2686]: I1106 00:21:34.333111 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.346318 kubelet[2686]: W1106 00:21:34.346087 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
Nov 6 00:21:34.349596 kubelet[2686]: W1106 00:21:34.348600 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:34.349596 kubelet[2686]: E1106 00:21:34.348736 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" already exists" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.353903 kubelet[2686]: W1106 00:21:34.353622 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:34.354063 kubelet[2686]: E1106 00:21:34.353995 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371712 kubelet[2686]: I1106 00:21:34.371648 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371712 kubelet[2686]: I1106 00:21:34.371723 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371870 kubelet[2686]: I1106 00:21:34.371757 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-kubeconfig\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371870 kubelet[2686]: I1106 00:21:34.371789 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371870 kubelet[2686]: I1106 00:21:34.371825 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b23a8bda8aa165496e43dd9470fd1016-kubeconfig\") pod \"kube-scheduler-ci-4459.1.0-n-800cd2f73d\" (UID: \"b23a8bda8aa165496e43dd9470fd1016\") " pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.371870 kubelet[2686]: I1106 00:21:34.371856 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-k8s-certs\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.372037 kubelet[2686]: I1106 00:21:34.371880 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-k8s-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.372037 
kubelet[2686]: I1106 00:21:34.371906 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/22cd155fe1f673d0fc16330f44352da3-ca-certs\") pod \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" (UID: \"22cd155fe1f673d0fc16330f44352da3\") " pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.372037 kubelet[2686]: I1106 00:21:34.371944 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd76df1f2f606a0846991ff43724f78f-ca-certs\") pod \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" (UID: \"dd76df1f2f606a0846991ff43724f78f\") " pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:34.641018 sudo[2700]: pam_unix(sudo:session): session closed for user root Nov 6 00:21:34.648547 kubelet[2686]: E1106 00:21:34.648257 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:34.650367 kubelet[2686]: E1106 00:21:34.649802 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:34.654493 kubelet[2686]: E1106 00:21:34.654438 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:34.904292 kubelet[2686]: I1106 00:21:34.904121 2686 apiserver.go:52] "Watching apiserver" Nov 6 00:21:34.968364 kubelet[2686]: I1106 00:21:34.968305 2686 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:21:35.025588 kubelet[2686]: I1106 00:21:35.025124 2686 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" podStartSLOduration=2.025070471 podStartE2EDuration="2.025070471s" podCreationTimestamp="2025-11-06 00:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:21:35.023259528 +0000 UTC m=+1.254224662" watchObservedRunningTime="2025-11-06 00:21:35.025070471 +0000 UTC m=+1.256035595" Nov 6 00:21:35.058907 kubelet[2686]: I1106 00:21:35.058539 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" podStartSLOduration=3.058509705 podStartE2EDuration="3.058509705s" podCreationTimestamp="2025-11-06 00:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:21:35.044604681 +0000 UTC m=+1.275569829" watchObservedRunningTime="2025-11-06 00:21:35.058509705 +0000 UTC m=+1.289474842" Nov 6 00:21:35.077601 kubelet[2686]: I1106 00:21:35.077324 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:35.082599 kubelet[2686]: I1106 00:21:35.079754 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:35.083043 kubelet[2686]: E1106 00:21:35.081953 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:35.083192 kubelet[2686]: I1106 00:21:35.079501 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.1.0-n-800cd2f73d" podStartSLOduration=1.079471217 podStartE2EDuration="1.079471217s" podCreationTimestamp="2025-11-06 
00:21:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:21:35.060870297 +0000 UTC m=+1.291835436" watchObservedRunningTime="2025-11-06 00:21:35.079471217 +0000 UTC m=+1.310436342" Nov 6 00:21:35.108958 kubelet[2686]: W1106 00:21:35.108911 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:35.108958 kubelet[2686]: W1106 00:21:35.108957 2686 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 6 00:21:35.109179 kubelet[2686]: E1106 00:21:35.109008 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.1.0-n-800cd2f73d\" already exists" pod="kube-system/kube-controller-manager-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:35.109270 kubelet[2686]: E1106 00:21:35.109249 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:35.109394 kubelet[2686]: E1106 00:21:35.109354 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.1.0-n-800cd2f73d\" already exists" pod="kube-system/kube-apiserver-ci-4459.1.0-n-800cd2f73d" Nov 6 00:21:35.109505 kubelet[2686]: E1106 00:21:35.109489 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:36.086657 kubelet[2686]: E1106 00:21:36.086043 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:36.088603 
kubelet[2686]: E1106 00:21:36.088578 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:36.091640 kubelet[2686]: E1106 00:21:36.088977 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:36.451041 sudo[1762]: pam_unix(sudo:session): session closed for user root Nov 6 00:21:36.454988 sshd[1761]: Connection closed by 139.178.68.195 port 59196 Nov 6 00:21:36.456134 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Nov 6 00:21:36.460418 systemd[1]: sshd@6-147.182.203.34:22-139.178.68.195:59196.service: Deactivated successfully. Nov 6 00:21:36.465676 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:21:36.466335 systemd[1]: session-7.scope: Consumed 6.312s CPU time, 223.1M memory peak. Nov 6 00:21:36.469846 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:21:36.472631 systemd-logind[1528]: Removed session 7. Nov 6 00:21:37.086192 kubelet[2686]: E1106 00:21:37.086145 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:38.028021 kubelet[2686]: I1106 00:21:38.027946 2686 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:21:38.031615 containerd[1558]: time="2025-11-06T00:21:38.031463291Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 6 00:21:38.032283 kubelet[2686]: I1106 00:21:38.032257 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:21:38.089320 kubelet[2686]: E1106 00:21:38.089259 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:38.787957 systemd[1]: Created slice kubepods-besteffort-podeedd1725_ee62_4428_9e66_eefd9410d269.slice - libcontainer container kubepods-besteffort-podeedd1725_ee62_4428_9e66_eefd9410d269.slice. Nov 6 00:21:38.800537 kubelet[2686]: I1106 00:21:38.800356 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eedd1725-ee62-4428-9e66-eefd9410d269-lib-modules\") pod \"kube-proxy-gdvpm\" (UID: \"eedd1725-ee62-4428-9e66-eefd9410d269\") " pod="kube-system/kube-proxy-gdvpm" Nov 6 00:21:38.800537 kubelet[2686]: I1106 00:21:38.800458 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-config-path\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800537 kubelet[2686]: I1106 00:21:38.800498 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-kernel\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800537 kubelet[2686]: I1106 00:21:38.800515 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/eedd1725-ee62-4428-9e66-eefd9410d269-xtables-lock\") pod \"kube-proxy-gdvpm\" (UID: \"eedd1725-ee62-4428-9e66-eefd9410d269\") " pod="kube-system/kube-proxy-gdvpm" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800530 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-hostproc\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800599 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-lib-modules\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800617 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxdnx\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800656 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-bpf-maps\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800673 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cni-path\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " 
pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800799 kubelet[2686]: I1106 00:21:38.800688 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-cgroup\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800704 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/326ba752-dd09-42d1-82e8-2bf0fef820b9-clustermesh-secrets\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800749 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eedd1725-ee62-4428-9e66-eefd9410d269-kube-proxy\") pod \"kube-proxy-gdvpm\" (UID: \"eedd1725-ee62-4428-9e66-eefd9410d269\") " pod="kube-system/kube-proxy-gdvpm" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800772 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4rdt\" (UniqueName: \"kubernetes.io/projected/eedd1725-ee62-4428-9e66-eefd9410d269-kube-api-access-d4rdt\") pod \"kube-proxy-gdvpm\" (UID: \"eedd1725-ee62-4428-9e66-eefd9410d269\") " pod="kube-system/kube-proxy-gdvpm" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800789 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-etc-cni-netd\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800824 2686 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-run\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.800967 kubelet[2686]: I1106 00:21:38.800849 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-hubble-tls\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.801163 kubelet[2686]: I1106 00:21:38.800871 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-xtables-lock\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.801163 kubelet[2686]: I1106 00:21:38.800907 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-net\") pod \"cilium-hm2gx\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " pod="kube-system/cilium-hm2gx" Nov 6 00:21:38.805438 systemd[1]: Created slice kubepods-burstable-pod326ba752_dd09_42d1_82e8_2bf0fef820b9.slice - libcontainer container kubepods-burstable-pod326ba752_dd09_42d1_82e8_2bf0fef820b9.slice. 
Nov 6 00:21:38.935717 kubelet[2686]: E1106 00:21:38.930510 2686 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 00:21:38.935717 kubelet[2686]: E1106 00:21:38.930917 2686 projected.go:194] Error preparing data for projected volume kube-api-access-sxdnx for pod kube-system/cilium-hm2gx: configmap "kube-root-ca.crt" not found Nov 6 00:21:38.935717 kubelet[2686]: E1106 00:21:38.931187 2686 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx podName:326ba752-dd09-42d1-82e8-2bf0fef820b9 nodeName:}" failed. No retries permitted until 2025-11-06 00:21:39.431034728 +0000 UTC m=+5.661999842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sxdnx" (UniqueName: "kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx") pod "cilium-hm2gx" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9") : configmap "kube-root-ca.crt" not found Nov 6 00:21:38.940822 kubelet[2686]: E1106 00:21:38.940319 2686 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 00:21:38.940822 kubelet[2686]: E1106 00:21:38.940771 2686 projected.go:194] Error preparing data for projected volume kube-api-access-d4rdt for pod kube-system/kube-proxy-gdvpm: configmap "kube-root-ca.crt" not found Nov 6 00:21:38.941070 kubelet[2686]: E1106 00:21:38.940945 2686 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eedd1725-ee62-4428-9e66-eefd9410d269-kube-api-access-d4rdt podName:eedd1725-ee62-4428-9e66-eefd9410d269 nodeName:}" failed. No retries permitted until 2025-11-06 00:21:39.440918973 +0000 UTC m=+5.671884087 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d4rdt" (UniqueName: "kubernetes.io/projected/eedd1725-ee62-4428-9e66-eefd9410d269-kube-api-access-d4rdt") pod "kube-proxy-gdvpm" (UID: "eedd1725-ee62-4428-9e66-eefd9410d269") : configmap "kube-root-ca.crt" not found Nov 6 00:21:39.084974 systemd[1]: Created slice kubepods-besteffort-poda9e0a8c3_2891_4c61_b5e2_42842480f843.slice - libcontainer container kubepods-besteffort-poda9e0a8c3_2891_4c61_b5e2_42842480f843.slice. Nov 6 00:21:39.106687 kubelet[2686]: I1106 00:21:39.105914 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e0a8c3-2891-4c61-b5e2-42842480f843-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6527p\" (UID: \"a9e0a8c3-2891-4c61-b5e2-42842480f843\") " pod="kube-system/cilium-operator-6c4d7847fc-6527p" Nov 6 00:21:39.108581 kubelet[2686]: I1106 00:21:39.107535 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vn7\" (UniqueName: \"kubernetes.io/projected/a9e0a8c3-2891-4c61-b5e2-42842480f843-kube-api-access-r5vn7\") pod \"cilium-operator-6c4d7847fc-6527p\" (UID: \"a9e0a8c3-2891-4c61-b5e2-42842480f843\") " pod="kube-system/cilium-operator-6c4d7847fc-6527p" Nov 6 00:21:39.389722 kubelet[2686]: E1106 00:21:39.389655 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.390695 containerd[1558]: time="2025-11-06T00:21:39.390630948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6527p,Uid:a9e0a8c3-2891-4c61-b5e2-42842480f843,Namespace:kube-system,Attempt:0,}" Nov 6 00:21:39.427970 containerd[1558]: time="2025-11-06T00:21:39.427864683Z" level=info msg="connecting to shim 
fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92" address="unix:///run/containerd/s/d55ed080ba55706ad0de1a14ae6d5538271d387defbf4258e3502b34e4e14bba" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:21:39.472966 systemd[1]: Started cri-containerd-fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92.scope - libcontainer container fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92. Nov 6 00:21:39.560008 containerd[1558]: time="2025-11-06T00:21:39.559868329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6527p,Uid:a9e0a8c3-2891-4c61-b5e2-42842480f843,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\"" Nov 6 00:21:39.562086 kubelet[2686]: E1106 00:21:39.561759 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.566196 containerd[1558]: time="2025-11-06T00:21:39.566148495Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 00:21:39.570235 systemd-resolved[1437]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Nov 6 00:21:39.698162 kubelet[2686]: E1106 00:21:39.697792 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.701611 containerd[1558]: time="2025-11-06T00:21:39.699807736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdvpm,Uid:eedd1725-ee62-4428-9e66-eefd9410d269,Namespace:kube-system,Attempt:0,}" Nov 6 00:21:39.711182 kubelet[2686]: E1106 00:21:39.710737 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.711522 containerd[1558]: time="2025-11-06T00:21:39.711465187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hm2gx,Uid:326ba752-dd09-42d1-82e8-2bf0fef820b9,Namespace:kube-system,Attempt:0,}" Nov 6 00:21:39.741824 containerd[1558]: time="2025-11-06T00:21:39.741749783Z" level=info msg="connecting to shim 9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929" address="unix:///run/containerd/s/57f5e04cb80886f02582c0a5f046e7d7d119e0595131afdb9e4046c6e4fada86" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:21:39.747327 containerd[1558]: time="2025-11-06T00:21:39.747189851Z" level=info msg="connecting to shim a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:21:39.775840 systemd[1]: Started cri-containerd-9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929.scope - libcontainer container 9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929. 
Nov 6 00:21:39.788350 systemd[1]: Started cri-containerd-a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040.scope - libcontainer container a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040. Nov 6 00:21:39.857296 containerd[1558]: time="2025-11-06T00:21:39.857237423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdvpm,Uid:eedd1725-ee62-4428-9e66-eefd9410d269,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929\"" Nov 6 00:21:39.859257 kubelet[2686]: E1106 00:21:39.859219 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.868533 containerd[1558]: time="2025-11-06T00:21:39.868459311Z" level=info msg="CreateContainer within sandbox \"9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:21:39.873468 containerd[1558]: time="2025-11-06T00:21:39.873390864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hm2gx,Uid:326ba752-dd09-42d1-82e8-2bf0fef820b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\"" Nov 6 00:21:39.875845 kubelet[2686]: E1106 00:21:39.875715 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:39.887649 containerd[1558]: time="2025-11-06T00:21:39.887539179Z" level=info msg="Container 7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:39.898909 containerd[1558]: time="2025-11-06T00:21:39.898848852Z" level=info msg="CreateContainer within sandbox 
\"9e505bbd773905c58d8face5b917aa9f29720696f3e054de5eb107e5dbcc3929\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35\"" Nov 6 00:21:39.899982 containerd[1558]: time="2025-11-06T00:21:39.899811539Z" level=info msg="StartContainer for \"7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35\"" Nov 6 00:21:39.912731 containerd[1558]: time="2025-11-06T00:21:39.912528079Z" level=info msg="connecting to shim 7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35" address="unix:///run/containerd/s/57f5e04cb80886f02582c0a5f046e7d7d119e0595131afdb9e4046c6e4fada86" protocol=ttrpc version=3 Nov 6 00:21:39.963959 systemd[1]: Started cri-containerd-7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35.scope - libcontainer container 7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35. Nov 6 00:21:40.039004 containerd[1558]: time="2025-11-06T00:21:40.038845072Z" level=info msg="StartContainer for \"7c5d4733adbf720b029c8b9d2815d2644849ee2c3f2d52c7b1523beb5bb59d35\" returns successfully" Nov 6 00:21:40.107216 kubelet[2686]: E1106 00:21:40.107163 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:40.122696 kubelet[2686]: I1106 00:21:40.122536 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gdvpm" podStartSLOduration=2.122498383 podStartE2EDuration="2.122498383s" podCreationTimestamp="2025-11-06 00:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:21:40.122497397 +0000 UTC m=+6.353462518" watchObservedRunningTime="2025-11-06 00:21:40.122498383 +0000 UTC m=+6.353463515" Nov 6 00:21:40.385452 kubelet[2686]: E1106 00:21:40.384675 2686 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:40.918223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687405720.mount: Deactivated successfully. Nov 6 00:21:41.114179 kubelet[2686]: E1106 00:21:41.114130 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:42.778749 containerd[1558]: time="2025-11-06T00:21:42.777887090Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:42.779736 containerd[1558]: time="2025-11-06T00:21:42.779403547Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 6 00:21:42.782097 containerd[1558]: time="2025-11-06T00:21:42.780649184Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:42.782097 containerd[1558]: time="2025-11-06T00:21:42.781952424Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.215309776s" Nov 6 00:21:42.782097 containerd[1558]: time="2025-11-06T00:21:42.781990883Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 6 00:21:42.786023 containerd[1558]: time="2025-11-06T00:21:42.785804103Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 00:21:42.788260 containerd[1558]: time="2025-11-06T00:21:42.787675207Z" level=info msg="CreateContainer within sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 00:21:42.802835 containerd[1558]: time="2025-11-06T00:21:42.802771045Z" level=info msg="Container af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:42.816582 containerd[1558]: time="2025-11-06T00:21:42.816393388Z" level=info msg="CreateContainer within sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\"" Nov 6 00:21:42.817610 containerd[1558]: time="2025-11-06T00:21:42.817388033Z" level=info msg="StartContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\"" Nov 6 00:21:42.822127 containerd[1558]: time="2025-11-06T00:21:42.822058655Z" level=info msg="connecting to shim af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2" address="unix:///run/containerd/s/d55ed080ba55706ad0de1a14ae6d5538271d387defbf4258e3502b34e4e14bba" protocol=ttrpc version=3 Nov 6 00:21:42.853033 systemd[1]: Started cri-containerd-af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2.scope - libcontainer container af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2. 
Nov 6 00:21:42.914516 containerd[1558]: time="2025-11-06T00:21:42.914450569Z" level=info msg="StartContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" returns successfully" Nov 6 00:21:43.124511 kubelet[2686]: E1106 00:21:43.124466 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:43.983745 kubelet[2686]: E1106 00:21:43.983688 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:44.031602 kubelet[2686]: I1106 00:21:44.028355 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6527p" podStartSLOduration=1.808099122 podStartE2EDuration="5.028313114s" podCreationTimestamp="2025-11-06 00:21:39 +0000 UTC" firstStartedPulling="2025-11-06 00:21:39.564171317 +0000 UTC m=+5.795136433" lastFinishedPulling="2025-11-06 00:21:42.784385303 +0000 UTC m=+9.015350425" observedRunningTime="2025-11-06 00:21:43.171008695 +0000 UTC m=+9.401973817" watchObservedRunningTime="2025-11-06 00:21:44.028313114 +0000 UTC m=+10.259278229" Nov 6 00:21:44.155059 kubelet[2686]: E1106 00:21:44.155013 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:44.156583 kubelet[2686]: E1106 00:21:44.155891 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:47.429726 kubelet[2686]: E1106 00:21:47.429266 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:47.736076 update_engine[1534]: I20251106 00:21:47.735623 1534 update_attempter.cc:509] Updating boot flags... Nov 6 00:21:48.241717 kubelet[2686]: E1106 00:21:48.241663 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:48.412698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4155984406.mount: Deactivated successfully. Nov 6 00:21:51.314417 containerd[1558]: time="2025-11-06T00:21:51.314197753Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:51.316101 containerd[1558]: time="2025-11-06T00:21:51.315686242Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 6 00:21:51.316101 containerd[1558]: time="2025-11-06T00:21:51.316044420Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:21:51.318188 containerd[1558]: time="2025-11-06T00:21:51.318131421Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.531622184s" Nov 6 00:21:51.318326 containerd[1558]: time="2025-11-06T00:21:51.318192953Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 6 00:21:51.323248 containerd[1558]: time="2025-11-06T00:21:51.323182307Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:21:51.349604 containerd[1558]: time="2025-11-06T00:21:51.347999786Z" level=info msg="Container 9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:51.355760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066263887.mount: Deactivated successfully. Nov 6 00:21:51.364967 containerd[1558]: time="2025-11-06T00:21:51.364816135Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\"" Nov 6 00:21:51.366468 containerd[1558]: time="2025-11-06T00:21:51.366391110Z" level=info msg="StartContainer for \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\"" Nov 6 00:21:51.368745 containerd[1558]: time="2025-11-06T00:21:51.368533234Z" level=info msg="connecting to shim 9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" protocol=ttrpc version=3 Nov 6 00:21:51.412882 systemd[1]: Started cri-containerd-9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a.scope - libcontainer container 9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a. Nov 6 00:21:51.482500 systemd[1]: cri-containerd-9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a.scope: Deactivated successfully. 
Nov 6 00:21:51.490862 containerd[1558]: time="2025-11-06T00:21:51.490715260Z" level=info msg="StartContainer for \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" returns successfully" Nov 6 00:21:51.533566 containerd[1558]: time="2025-11-06T00:21:51.533343972Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" id:\"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" pid:3171 exited_at:{seconds:1762388511 nanos:491906667}" Nov 6 00:21:51.534908 containerd[1558]: time="2025-11-06T00:21:51.534845031Z" level=info msg="received exit event container_id:\"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" id:\"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" pid:3171 exited_at:{seconds:1762388511 nanos:491906667}" Nov 6 00:21:51.574753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a-rootfs.mount: Deactivated successfully. 
Nov 6 00:21:52.251585 kubelet[2686]: E1106 00:21:52.251454 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:52.259344 containerd[1558]: time="2025-11-06T00:21:52.259287156Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:21:52.277884 containerd[1558]: time="2025-11-06T00:21:52.277804306Z" level=info msg="Container c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:52.288072 containerd[1558]: time="2025-11-06T00:21:52.287988799Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\"" Nov 6 00:21:52.290137 containerd[1558]: time="2025-11-06T00:21:52.290009325Z" level=info msg="StartContainer for \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\"" Nov 6 00:21:52.292869 containerd[1558]: time="2025-11-06T00:21:52.292723314Z" level=info msg="connecting to shim c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" protocol=ttrpc version=3 Nov 6 00:21:52.324989 systemd[1]: Started cri-containerd-c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af.scope - libcontainer container c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af. 
Nov 6 00:21:52.387765 containerd[1558]: time="2025-11-06T00:21:52.387648761Z" level=info msg="StartContainer for \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" returns successfully" Nov 6 00:21:52.406106 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:21:52.407051 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:21:52.409872 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:21:52.413078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:21:52.418978 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:21:52.420085 systemd[1]: cri-containerd-c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af.scope: Deactivated successfully. Nov 6 00:21:52.426137 containerd[1558]: time="2025-11-06T00:21:52.426065684Z" level=info msg="received exit event container_id:\"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" id:\"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" pid:3215 exited_at:{seconds:1762388512 nanos:425675528}" Nov 6 00:21:52.426955 containerd[1558]: time="2025-11-06T00:21:52.426887080Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" id:\"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" pid:3215 exited_at:{seconds:1762388512 nanos:425675528}" Nov 6 00:21:52.463936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:21:52.476208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af-rootfs.mount: Deactivated successfully. 
Nov 6 00:21:53.264598 kubelet[2686]: E1106 00:21:53.264438 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:53.271722 containerd[1558]: time="2025-11-06T00:21:53.271612042Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:21:53.295858 containerd[1558]: time="2025-11-06T00:21:53.295779321Z" level=info msg="Container 4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:53.321118 containerd[1558]: time="2025-11-06T00:21:53.321041447Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\"" Nov 6 00:21:53.323212 containerd[1558]: time="2025-11-06T00:21:53.322940750Z" level=info msg="StartContainer for \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\"" Nov 6 00:21:53.325976 containerd[1558]: time="2025-11-06T00:21:53.325911414Z" level=info msg="connecting to shim 4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" protocol=ttrpc version=3 Nov 6 00:21:53.372030 systemd[1]: Started cri-containerd-4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a.scope - libcontainer container 4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a. Nov 6 00:21:53.443461 systemd[1]: cri-containerd-4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a.scope: Deactivated successfully. 
Nov 6 00:21:53.448327 containerd[1558]: time="2025-11-06T00:21:53.447378987Z" level=info msg="StartContainer for \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" returns successfully" Nov 6 00:21:53.453319 containerd[1558]: time="2025-11-06T00:21:53.452911277Z" level=info msg="received exit event container_id:\"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" id:\"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" pid:3262 exited_at:{seconds:1762388513 nanos:450359776}" Nov 6 00:21:53.458789 containerd[1558]: time="2025-11-06T00:21:53.458541899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" id:\"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" pid:3262 exited_at:{seconds:1762388513 nanos:450359776}" Nov 6 00:21:53.493879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a-rootfs.mount: Deactivated successfully. 
Nov 6 00:21:54.275334 kubelet[2686]: E1106 00:21:54.275248 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:54.285659 containerd[1558]: time="2025-11-06T00:21:54.284922298Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:21:54.346602 containerd[1558]: time="2025-11-06T00:21:54.345275122Z" level=info msg="Container e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:54.372410 containerd[1558]: time="2025-11-06T00:21:54.372320569Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\"" Nov 6 00:21:54.374610 containerd[1558]: time="2025-11-06T00:21:54.373446235Z" level=info msg="StartContainer for \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\"" Nov 6 00:21:54.374950 containerd[1558]: time="2025-11-06T00:21:54.374915399Z" level=info msg="connecting to shim e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" protocol=ttrpc version=3 Nov 6 00:21:54.407901 systemd[1]: Started cri-containerd-e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c.scope - libcontainer container e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c. Nov 6 00:21:54.464486 systemd[1]: cri-containerd-e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c.scope: Deactivated successfully. 
Nov 6 00:21:54.468483 containerd[1558]: time="2025-11-06T00:21:54.468325482Z" level=info msg="received exit event container_id:\"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" id:\"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" pid:3303 exited_at:{seconds:1762388514 nanos:467954475}" Nov 6 00:21:54.471365 containerd[1558]: time="2025-11-06T00:21:54.471229332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" id:\"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" pid:3303 exited_at:{seconds:1762388514 nanos:467954475}" Nov 6 00:21:54.472011 containerd[1558]: time="2025-11-06T00:21:54.471409009Z" level=info msg="StartContainer for \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" returns successfully" Nov 6 00:21:54.508796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c-rootfs.mount: Deactivated successfully. Nov 6 00:21:55.291597 kubelet[2686]: E1106 00:21:55.290957 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:55.297238 containerd[1558]: time="2025-11-06T00:21:55.297079452Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:21:55.337645 containerd[1558]: time="2025-11-06T00:21:55.336399692Z" level=info msg="Container f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:21:55.339654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1082033066.mount: Deactivated successfully. 
Nov 6 00:21:55.356446 containerd[1558]: time="2025-11-06T00:21:55.356356667Z" level=info msg="CreateContainer within sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\"" Nov 6 00:21:55.357977 containerd[1558]: time="2025-11-06T00:21:55.357876402Z" level=info msg="StartContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\"" Nov 6 00:21:55.359737 containerd[1558]: time="2025-11-06T00:21:55.359655622Z" level=info msg="connecting to shim f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1" address="unix:///run/containerd/s/379a80748275068110590663aeabbb57922bf14a29640e079888b496a6ef80bf" protocol=ttrpc version=3 Nov 6 00:21:55.398871 systemd[1]: Started cri-containerd-f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1.scope - libcontainer container f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1. Nov 6 00:21:55.476703 containerd[1558]: time="2025-11-06T00:21:55.476641363Z" level=info msg="StartContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" returns successfully" Nov 6 00:21:55.624454 containerd[1558]: time="2025-11-06T00:21:55.624329821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" id:\"9a14e75c3ba146841f5148fbd01a466e4377ae9a9f12fb7571ccde7691c6a4cc\" pid:3372 exited_at:{seconds:1762388515 nanos:622615835}" Nov 6 00:21:55.699332 kubelet[2686]: I1106 00:21:55.697832 2686 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:21:55.762872 systemd[1]: Created slice kubepods-burstable-pod90370422_3468_4938_ac6d_dcdf8dd21c0f.slice - libcontainer container kubepods-burstable-pod90370422_3468_4938_ac6d_dcdf8dd21c0f.slice. 
Nov 6 00:21:55.771952 systemd[1]: Created slice kubepods-burstable-podee23f997_e996_4265_8dd0_d6a808bc991e.slice - libcontainer container kubepods-burstable-podee23f997_e996_4265_8dd0_d6a808bc991e.slice. Nov 6 00:21:55.852620 kubelet[2686]: I1106 00:21:55.852521 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90370422-3468-4938-ac6d-dcdf8dd21c0f-config-volume\") pod \"coredns-668d6bf9bc-dvrqp\" (UID: \"90370422-3468-4938-ac6d-dcdf8dd21c0f\") " pod="kube-system/coredns-668d6bf9bc-dvrqp" Nov 6 00:21:55.853067 kubelet[2686]: I1106 00:21:55.853018 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvsm\" (UniqueName: \"kubernetes.io/projected/90370422-3468-4938-ac6d-dcdf8dd21c0f-kube-api-access-qbvsm\") pod \"coredns-668d6bf9bc-dvrqp\" (UID: \"90370422-3468-4938-ac6d-dcdf8dd21c0f\") " pod="kube-system/coredns-668d6bf9bc-dvrqp" Nov 6 00:21:55.853268 kubelet[2686]: I1106 00:21:55.853208 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzrc\" (UniqueName: \"kubernetes.io/projected/ee23f997-e996-4265-8dd0-d6a808bc991e-kube-api-access-tpzrc\") pod \"coredns-668d6bf9bc-rrk5m\" (UID: \"ee23f997-e996-4265-8dd0-d6a808bc991e\") " pod="kube-system/coredns-668d6bf9bc-rrk5m" Nov 6 00:21:55.853471 kubelet[2686]: I1106 00:21:55.853435 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee23f997-e996-4265-8dd0-d6a808bc991e-config-volume\") pod \"coredns-668d6bf9bc-rrk5m\" (UID: \"ee23f997-e996-4265-8dd0-d6a808bc991e\") " pod="kube-system/coredns-668d6bf9bc-rrk5m" Nov 6 00:21:56.068222 kubelet[2686]: E1106 00:21:56.068064 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:56.070411 containerd[1558]: time="2025-11-06T00:21:56.070188149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvrqp,Uid:90370422-3468-4938-ac6d-dcdf8dd21c0f,Namespace:kube-system,Attempt:0,}" Nov 6 00:21:56.082695 kubelet[2686]: E1106 00:21:56.080974 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:56.089433 containerd[1558]: time="2025-11-06T00:21:56.089181170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrk5m,Uid:ee23f997-e996-4265-8dd0-d6a808bc991e,Namespace:kube-system,Attempt:0,}" Nov 6 00:21:56.302430 kubelet[2686]: E1106 00:21:56.302379 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:57.304781 kubelet[2686]: E1106 00:21:57.304669 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:57.968237 systemd-networkd[1436]: cilium_host: Link UP Nov 6 00:21:57.971082 systemd-networkd[1436]: cilium_net: Link UP Nov 6 00:21:57.971366 systemd-networkd[1436]: cilium_host: Gained carrier Nov 6 00:21:57.973698 systemd-networkd[1436]: cilium_net: Gained carrier Nov 6 00:21:58.165873 systemd-networkd[1436]: cilium_vxlan: Link UP Nov 6 00:21:58.165885 systemd-networkd[1436]: cilium_vxlan: Gained carrier Nov 6 00:21:58.308213 kubelet[2686]: E1106 00:21:58.307581 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:58.338791 
systemd-networkd[1436]: cilium_host: Gained IPv6LL Nov 6 00:21:58.546883 systemd-networkd[1436]: cilium_net: Gained IPv6LL Nov 6 00:21:58.671629 kernel: NET: Registered PF_ALG protocol family Nov 6 00:21:59.721199 systemd-networkd[1436]: lxc_health: Link UP Nov 6 00:21:59.726093 kubelet[2686]: E1106 00:21:59.726048 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:21:59.726817 systemd-networkd[1436]: lxc_health: Gained carrier Nov 6 00:21:59.773605 kubelet[2686]: I1106 00:21:59.773500 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hm2gx" podStartSLOduration=10.330669921 podStartE2EDuration="21.773471947s" podCreationTimestamp="2025-11-06 00:21:38 +0000 UTC" firstStartedPulling="2025-11-06 00:21:39.876661983 +0000 UTC m=+6.107627094" lastFinishedPulling="2025-11-06 00:21:51.319464022 +0000 UTC m=+17.550429120" observedRunningTime="2025-11-06 00:21:56.33034984 +0000 UTC m=+22.561314967" watchObservedRunningTime="2025-11-06 00:21:59.773471947 +0000 UTC m=+26.004437078" Nov 6 00:22:00.084342 systemd-networkd[1436]: cilium_vxlan: Gained IPv6LL Nov 6 00:22:00.190452 systemd-networkd[1436]: lxcec0b96bef69b: Link UP Nov 6 00:22:00.194728 kernel: eth0: renamed from tmp66973 Nov 6 00:22:00.199675 systemd-networkd[1436]: lxcec0b96bef69b: Gained carrier Nov 6 00:22:00.225179 systemd-networkd[1436]: lxc1767d19bb67d: Link UP Nov 6 00:22:00.236158 kernel: eth0: renamed from tmpdbdaf Nov 6 00:22:00.246755 systemd-networkd[1436]: lxc1767d19bb67d: Gained carrier Nov 6 00:22:01.235926 systemd-networkd[1436]: lxc_health: Gained IPv6LL Nov 6 00:22:01.618830 systemd-networkd[1436]: lxcec0b96bef69b: Gained IPv6LL Nov 6 00:22:02.130866 systemd-networkd[1436]: lxc1767d19bb67d: Gained IPv6LL Nov 6 00:22:05.804133 containerd[1558]: time="2025-11-06T00:22:05.802321076Z" level=info msg="connecting 
to shim 66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857" address="unix:///run/containerd/s/ee7c946ef11cef398a6110ebc23674913575aa1ebaf7b7eb94633d892e26b3b8" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:05.851930 systemd[1]: Started cri-containerd-66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857.scope - libcontainer container 66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857. Nov 6 00:22:05.880478 containerd[1558]: time="2025-11-06T00:22:05.880411403Z" level=info msg="connecting to shim dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f" address="unix:///run/containerd/s/637bffafe5ad5bd2e2d80533f4b3aef70fa37176a810e178481483bc90cd2a7c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:05.936986 systemd[1]: Started cri-containerd-dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f.scope - libcontainer container dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f. Nov 6 00:22:06.031956 containerd[1558]: time="2025-11-06T00:22:06.029539800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvrqp,Uid:90370422-3468-4938-ac6d-dcdf8dd21c0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857\"" Nov 6 00:22:06.033435 kubelet[2686]: E1106 00:22:06.033384 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:06.042516 containerd[1558]: time="2025-11-06T00:22:06.042444898Z" level=info msg="CreateContainer within sandbox \"66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:22:06.065522 containerd[1558]: time="2025-11-06T00:22:06.064840575Z" level=info msg="Container 611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd: CDI devices from CRI 
Config.CDIDevices: []" Nov 6 00:22:06.088830 containerd[1558]: time="2025-11-06T00:22:06.088748437Z" level=info msg="CreateContainer within sandbox \"66973bee36234fca6fdc44ffc95efaf0bd36be91214b0d8c945b9042c3bfd857\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd\"" Nov 6 00:22:06.091877 containerd[1558]: time="2025-11-06T00:22:06.091832111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rrk5m,Uid:ee23f997-e996-4265-8dd0-d6a808bc991e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f\"" Nov 6 00:22:06.093738 containerd[1558]: time="2025-11-06T00:22:06.093113421Z" level=info msg="StartContainer for \"611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd\"" Nov 6 00:22:06.094025 kubelet[2686]: E1106 00:22:06.093833 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:06.096483 containerd[1558]: time="2025-11-06T00:22:06.096367143Z" level=info msg="CreateContainer within sandbox \"dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:22:06.097489 containerd[1558]: time="2025-11-06T00:22:06.097445023Z" level=info msg="connecting to shim 611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd" address="unix:///run/containerd/s/ee7c946ef11cef398a6110ebc23674913575aa1ebaf7b7eb94633d892e26b3b8" protocol=ttrpc version=3 Nov 6 00:22:06.106231 containerd[1558]: time="2025-11-06T00:22:06.105438106Z" level=info msg="Container 9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:06.124395 containerd[1558]: time="2025-11-06T00:22:06.124316253Z" level=info msg="CreateContainer 
within sandbox \"dbdaff991e8e048ae5d574b72e15df78322617ac8bd7c530aacf8959e436f85f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec\"" Nov 6 00:22:06.128261 containerd[1558]: time="2025-11-06T00:22:06.128172259Z" level=info msg="StartContainer for \"9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec\"" Nov 6 00:22:06.134827 containerd[1558]: time="2025-11-06T00:22:06.134677435Z" level=info msg="connecting to shim 9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec" address="unix:///run/containerd/s/637bffafe5ad5bd2e2d80533f4b3aef70fa37176a810e178481483bc90cd2a7c" protocol=ttrpc version=3 Nov 6 00:22:06.143892 systemd[1]: Started cri-containerd-611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd.scope - libcontainer container 611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd. Nov 6 00:22:06.176978 systemd[1]: Started cri-containerd-9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec.scope - libcontainer container 9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec. 
Nov 6 00:22:06.236366 containerd[1558]: time="2025-11-06T00:22:06.236302002Z" level=info msg="StartContainer for \"611c8db2821d0d1bd59456c0dfe8f6145f2222928e1bc3f50a09f6443aefa6cd\" returns successfully" Nov 6 00:22:06.260071 containerd[1558]: time="2025-11-06T00:22:06.260003453Z" level=info msg="StartContainer for \"9c5b1eddfda1cd62bc6a494a31038e89f5bcebed54199c74934607f117a460ec\" returns successfully" Nov 6 00:22:06.344416 kubelet[2686]: E1106 00:22:06.344242 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:06.355896 kubelet[2686]: E1106 00:22:06.355809 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:06.379354 kubelet[2686]: I1106 00:22:06.379229 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rrk5m" podStartSLOduration=27.379200089 podStartE2EDuration="27.379200089s" podCreationTimestamp="2025-11-06 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:06.374894337 +0000 UTC m=+32.605859513" watchObservedRunningTime="2025-11-06 00:22:06.379200089 +0000 UTC m=+32.610165225" Nov 6 00:22:06.417086 kubelet[2686]: I1106 00:22:06.417012 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dvrqp" podStartSLOduration=27.416991797 podStartE2EDuration="27.416991797s" podCreationTimestamp="2025-11-06 00:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:06.414731018 +0000 UTC m=+32.645696150" watchObservedRunningTime="2025-11-06 
00:22:06.416991797 +0000 UTC m=+32.647956926" Nov 6 00:22:06.795244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947400075.mount: Deactivated successfully. Nov 6 00:22:07.356744 kubelet[2686]: E1106 00:22:07.355865 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:07.357403 kubelet[2686]: E1106 00:22:07.357365 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:08.059838 kubelet[2686]: I1106 00:22:08.059774 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:22:08.060452 kubelet[2686]: E1106 00:22:08.060417 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:08.360339 kubelet[2686]: E1106 00:22:08.359863 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:08.360339 kubelet[2686]: E1106 00:22:08.360222 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:08.360339 kubelet[2686]: E1106 00:22:08.360325 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:16.728962 systemd[1]: Started sshd@7-147.182.203.34:22-139.178.68.195:42120.service - OpenSSH per-connection server daemon (139.178.68.195:42120). 
Nov 6 00:22:16.842132 sshd[4016]: Accepted publickey for core from 139.178.68.195 port 42120 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:16.845108 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:16.853392 systemd-logind[1528]: New session 8 of user core. Nov 6 00:22:16.873003 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:22:17.570633 sshd[4020]: Connection closed by 139.178.68.195 port 42120 Nov 6 00:22:17.571824 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:17.576929 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:22:17.577299 systemd[1]: sshd@7-147.182.203.34:22-139.178.68.195:42120.service: Deactivated successfully. Nov 6 00:22:17.580257 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:22:17.583357 systemd-logind[1528]: Removed session 8. Nov 6 00:22:22.591144 systemd[1]: Started sshd@8-147.182.203.34:22-139.178.68.195:42128.service - OpenSSH per-connection server daemon (139.178.68.195:42128). Nov 6 00:22:22.677966 sshd[4033]: Accepted publickey for core from 139.178.68.195 port 42128 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:22.680144 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:22.687243 systemd-logind[1528]: New session 9 of user core. Nov 6 00:22:22.695928 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:22:22.872513 sshd[4036]: Connection closed by 139.178.68.195 port 42128 Nov 6 00:22:22.871540 sshd-session[4033]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:22.876648 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:22:22.877796 systemd[1]: sshd@8-147.182.203.34:22-139.178.68.195:42128.service: Deactivated successfully. 
Nov 6 00:22:22.881982 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:22:22.887074 systemd-logind[1528]: Removed session 9. Nov 6 00:22:27.889787 systemd[1]: Started sshd@9-147.182.203.34:22-139.178.68.195:58368.service - OpenSSH per-connection server daemon (139.178.68.195:58368). Nov 6 00:22:27.967132 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 58368 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:27.969600 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:27.978873 systemd-logind[1528]: New session 10 of user core. Nov 6 00:22:27.983258 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:22:28.136632 sshd[4051]: Connection closed by 139.178.68.195 port 58368 Nov 6 00:22:28.137280 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:28.142861 systemd[1]: sshd@9-147.182.203.34:22-139.178.68.195:58368.service: Deactivated successfully. Nov 6 00:22:28.145919 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:22:28.147325 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:22:28.149894 systemd-logind[1528]: Removed session 10. Nov 6 00:22:33.160065 systemd[1]: Started sshd@10-147.182.203.34:22-139.178.68.195:36288.service - OpenSSH per-connection server daemon (139.178.68.195:36288). Nov 6 00:22:33.257319 sshd[4064]: Accepted publickey for core from 139.178.68.195 port 36288 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:33.259379 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:33.267921 systemd-logind[1528]: New session 11 of user core. Nov 6 00:22:33.277938 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 6 00:22:33.470819 sshd[4067]: Connection closed by 139.178.68.195 port 36288 Nov 6 00:22:33.471910 sshd-session[4064]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:33.492281 systemd[1]: sshd@10-147.182.203.34:22-139.178.68.195:36288.service: Deactivated successfully. Nov 6 00:22:33.496059 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:22:33.498062 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:22:33.506878 systemd[1]: Started sshd@11-147.182.203.34:22-139.178.68.195:36290.service - OpenSSH per-connection server daemon (139.178.68.195:36290). Nov 6 00:22:33.509803 systemd-logind[1528]: Removed session 11. Nov 6 00:22:33.595703 sshd[4080]: Accepted publickey for core from 139.178.68.195 port 36290 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:33.597775 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:33.606667 systemd-logind[1528]: New session 12 of user core. Nov 6 00:22:33.613978 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:22:33.839169 sshd[4083]: Connection closed by 139.178.68.195 port 36290 Nov 6 00:22:33.840248 sshd-session[4080]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:33.855372 systemd[1]: sshd@11-147.182.203.34:22-139.178.68.195:36290.service: Deactivated successfully. Nov 6 00:22:33.860181 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:22:33.865863 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:22:33.875684 systemd[1]: Started sshd@12-147.182.203.34:22-139.178.68.195:36296.service - OpenSSH per-connection server daemon (139.178.68.195:36296). Nov 6 00:22:33.876948 systemd-logind[1528]: Removed session 12. 
Nov 6 00:22:33.960995 sshd[4093]: Accepted publickey for core from 139.178.68.195 port 36296 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:33.963124 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:33.972454 systemd-logind[1528]: New session 13 of user core. Nov 6 00:22:33.978914 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:22:34.166919 sshd[4096]: Connection closed by 139.178.68.195 port 36296 Nov 6 00:22:34.167944 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:34.175909 systemd[1]: sshd@12-147.182.203.34:22-139.178.68.195:36296.service: Deactivated successfully. Nov 6 00:22:34.179160 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:22:34.181883 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:22:34.183834 systemd-logind[1528]: Removed session 13. Nov 6 00:22:39.188728 systemd[1]: Started sshd@13-147.182.203.34:22-139.178.68.195:36308.service - OpenSSH per-connection server daemon (139.178.68.195:36308). Nov 6 00:22:39.280382 sshd[4111]: Accepted publickey for core from 139.178.68.195 port 36308 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:39.282316 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:39.290724 systemd-logind[1528]: New session 14 of user core. Nov 6 00:22:39.297947 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:22:39.488684 sshd[4114]: Connection closed by 139.178.68.195 port 36308 Nov 6 00:22:39.489940 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:39.497807 systemd[1]: sshd@13-147.182.203.34:22-139.178.68.195:36308.service: Deactivated successfully. Nov 6 00:22:39.501247 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:22:39.504084 systemd-logind[1528]: Session 14 logged out. 
Waiting for processes to exit. Nov 6 00:22:39.506954 systemd-logind[1528]: Removed session 14. Nov 6 00:22:44.509519 systemd[1]: Started sshd@14-147.182.203.34:22-139.178.68.195:36250.service - OpenSSH per-connection server daemon (139.178.68.195:36250). Nov 6 00:22:44.618231 sshd[4128]: Accepted publickey for core from 139.178.68.195 port 36250 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:44.619826 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:44.627589 systemd-logind[1528]: New session 15 of user core. Nov 6 00:22:44.635107 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:22:44.819697 sshd[4131]: Connection closed by 139.178.68.195 port 36250 Nov 6 00:22:44.820298 sshd-session[4128]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:44.826883 systemd[1]: sshd@14-147.182.203.34:22-139.178.68.195:36250.service: Deactivated successfully. Nov 6 00:22:44.830303 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:22:44.832718 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:22:44.835196 systemd-logind[1528]: Removed session 15. Nov 6 00:22:49.858520 systemd[1]: Started sshd@15-147.182.203.34:22-139.178.68.195:36254.service - OpenSSH per-connection server daemon (139.178.68.195:36254). Nov 6 00:22:49.947318 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 36254 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:49.950906 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:49.958888 systemd-logind[1528]: New session 16 of user core. Nov 6 00:22:49.977894 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 6 00:22:50.028152 kubelet[2686]: E1106 00:22:50.028106 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:50.132863 sshd[4146]: Connection closed by 139.178.68.195 port 36254 Nov 6 00:22:50.133845 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:50.149637 systemd[1]: sshd@15-147.182.203.34:22-139.178.68.195:36254.service: Deactivated successfully. Nov 6 00:22:50.152909 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:22:50.154289 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:22:50.160004 systemd[1]: Started sshd@16-147.182.203.34:22-139.178.68.195:36260.service - OpenSSH per-connection server daemon (139.178.68.195:36260). Nov 6 00:22:50.161472 systemd-logind[1528]: Removed session 16. Nov 6 00:22:50.240131 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 36260 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:50.242286 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:50.252271 systemd-logind[1528]: New session 17 of user core. Nov 6 00:22:50.258959 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:22:50.562263 sshd[4160]: Connection closed by 139.178.68.195 port 36260 Nov 6 00:22:50.563964 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:50.581542 systemd[1]: sshd@16-147.182.203.34:22-139.178.68.195:36260.service: Deactivated successfully. Nov 6 00:22:50.586011 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:22:50.588399 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:22:50.593402 systemd[1]: Started sshd@17-147.182.203.34:22-139.178.68.195:36264.service - OpenSSH per-connection server daemon (139.178.68.195:36264). 
Nov 6 00:22:50.594666 systemd-logind[1528]: Removed session 17. Nov 6 00:22:50.712811 sshd[4170]: Accepted publickey for core from 139.178.68.195 port 36264 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:50.714782 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:50.721528 systemd-logind[1528]: New session 18 of user core. Nov 6 00:22:50.736475 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:22:51.025199 kubelet[2686]: E1106 00:22:51.024602 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:22:51.458908 sshd[4173]: Connection closed by 139.178.68.195 port 36264 Nov 6 00:22:51.461791 sshd-session[4170]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:51.471256 systemd[1]: sshd@17-147.182.203.34:22-139.178.68.195:36264.service: Deactivated successfully. Nov 6 00:22:51.476766 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:22:51.480760 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:22:51.488964 systemd[1]: Started sshd@18-147.182.203.34:22-139.178.68.195:36280.service - OpenSSH per-connection server daemon (139.178.68.195:36280). Nov 6 00:22:51.491121 systemd-logind[1528]: Removed session 18. Nov 6 00:22:51.585376 sshd[4187]: Accepted publickey for core from 139.178.68.195 port 36280 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:51.587676 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:51.597943 systemd-logind[1528]: New session 19 of user core. Nov 6 00:22:51.604820 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 6 00:22:51.942876 sshd[4191]: Connection closed by 139.178.68.195 port 36280 Nov 6 00:22:51.943849 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:51.964446 systemd[1]: sshd@18-147.182.203.34:22-139.178.68.195:36280.service: Deactivated successfully. Nov 6 00:22:51.970786 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:22:51.973537 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:22:51.980229 systemd-logind[1528]: Removed session 19. Nov 6 00:22:51.982921 systemd[1]: Started sshd@19-147.182.203.34:22-139.178.68.195:36292.service - OpenSSH per-connection server daemon (139.178.68.195:36292). Nov 6 00:22:52.060467 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 36292 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:52.062108 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:52.070365 systemd-logind[1528]: New session 20 of user core. Nov 6 00:22:52.072856 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:22:52.239689 sshd[4206]: Connection closed by 139.178.68.195 port 36292 Nov 6 00:22:52.240610 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:52.247228 systemd[1]: sshd@19-147.182.203.34:22-139.178.68.195:36292.service: Deactivated successfully. Nov 6 00:22:52.250802 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:22:52.252131 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:22:52.254800 systemd-logind[1528]: Removed session 20. Nov 6 00:22:57.263880 systemd[1]: Started sshd@20-147.182.203.34:22-139.178.68.195:55976.service - OpenSSH per-connection server daemon (139.178.68.195:55976). 
Nov 6 00:22:57.349280 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 55976 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:22:57.351813 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:57.360211 systemd-logind[1528]: New session 21 of user core. Nov 6 00:22:57.367920 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:22:57.549282 sshd[4222]: Connection closed by 139.178.68.195 port 55976 Nov 6 00:22:57.550214 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:57.556668 systemd[1]: sshd@20-147.182.203.34:22-139.178.68.195:55976.service: Deactivated successfully. Nov 6 00:22:57.560378 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:22:57.562889 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:22:57.565700 systemd-logind[1528]: Removed session 21. Nov 6 00:22:59.025024 kubelet[2686]: E1106 00:22:59.024960 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:02.567142 systemd[1]: Started sshd@21-147.182.203.34:22-139.178.68.195:55992.service - OpenSSH per-connection server daemon (139.178.68.195:55992). Nov 6 00:23:02.676227 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 55992 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:02.678480 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:02.687356 systemd-logind[1528]: New session 22 of user core. Nov 6 00:23:02.692911 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 00:23:02.916835 sshd[4238]: Connection closed by 139.178.68.195 port 55992 Nov 6 00:23:02.917060 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:02.926983 systemd[1]: sshd@21-147.182.203.34:22-139.178.68.195:55992.service: Deactivated successfully. Nov 6 00:23:02.931145 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:23:02.934131 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Nov 6 00:23:02.937357 systemd-logind[1528]: Removed session 22. Nov 6 00:23:06.024924 kubelet[2686]: E1106 00:23:06.024582 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:07.935980 systemd[1]: Started sshd@22-147.182.203.34:22-139.178.68.195:32954.service - OpenSSH per-connection server daemon (139.178.68.195:32954). Nov 6 00:23:08.032171 sshd[4249]: Accepted publickey for core from 139.178.68.195 port 32954 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:08.036208 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:08.044923 systemd-logind[1528]: New session 23 of user core. Nov 6 00:23:08.049827 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:23:08.206854 sshd[4252]: Connection closed by 139.178.68.195 port 32954 Nov 6 00:23:08.208210 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:08.215751 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:23:08.216644 systemd[1]: sshd@22-147.182.203.34:22-139.178.68.195:32954.service: Deactivated successfully. Nov 6 00:23:08.219700 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:23:08.223338 systemd-logind[1528]: Removed session 23. 
Nov 6 00:23:09.024640 kubelet[2686]: E1106 00:23:09.024532 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:10.024253 kubelet[2686]: E1106 00:23:10.024176 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:13.224338 systemd[1]: Started sshd@23-147.182.203.34:22-139.178.68.195:36722.service - OpenSSH per-connection server daemon (139.178.68.195:36722). Nov 6 00:23:13.314455 sshd[4265]: Accepted publickey for core from 139.178.68.195 port 36722 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:13.317145 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:13.324787 systemd-logind[1528]: New session 24 of user core. Nov 6 00:23:13.328889 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:23:13.531241 sshd[4268]: Connection closed by 139.178.68.195 port 36722 Nov 6 00:23:13.531086 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:13.548463 systemd[1]: sshd@23-147.182.203.34:22-139.178.68.195:36722.service: Deactivated successfully. Nov 6 00:23:13.552323 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:23:13.554601 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:23:13.559719 systemd-logind[1528]: Removed session 24. Nov 6 00:23:13.562620 systemd[1]: Started sshd@24-147.182.203.34:22-139.178.68.195:36728.service - OpenSSH per-connection server daemon (139.178.68.195:36728). 
Nov 6 00:23:13.658590 sshd[4279]: Accepted publickey for core from 139.178.68.195 port 36728 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:13.661495 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:13.671075 systemd-logind[1528]: New session 25 of user core. Nov 6 00:23:13.676126 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:23:14.026351 kubelet[2686]: E1106 00:23:14.024873 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:15.178115 containerd[1558]: time="2025-11-06T00:23:15.178053825Z" level=info msg="StopContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" with timeout 30 (s)" Nov 6 00:23:15.193858 containerd[1558]: time="2025-11-06T00:23:15.193784628Z" level=info msg="Stop container \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" with signal terminated" Nov 6 00:23:15.216425 containerd[1558]: time="2025-11-06T00:23:15.216360596Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:23:15.224443 containerd[1558]: time="2025-11-06T00:23:15.224372142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" id:\"367bef7d8370361e4ecdc33953e54ed999609fb5a52127a4a77c61144c827ef6\" pid:4300 exited_at:{seconds:1762388595 nanos:223683292}" Nov 6 00:23:15.229749 systemd[1]: cri-containerd-af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2.scope: Deactivated successfully. 
Nov 6 00:23:15.230430 systemd[1]: cri-containerd-af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2.scope: Consumed 508ms CPU time, 26.7M memory peak, 3M read from disk, 4K written to disk. Nov 6 00:23:15.233016 containerd[1558]: time="2025-11-06T00:23:15.232929985Z" level=info msg="received exit event container_id:\"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" id:\"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" pid:3094 exited_at:{seconds:1762388595 nanos:231440245}" Nov 6 00:23:15.237087 containerd[1558]: time="2025-11-06T00:23:15.236848878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" id:\"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" pid:3094 exited_at:{seconds:1762388595 nanos:231440245}" Nov 6 00:23:15.247550 containerd[1558]: time="2025-11-06T00:23:15.247432504Z" level=info msg="StopContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" with timeout 2 (s)" Nov 6 00:23:15.248358 containerd[1558]: time="2025-11-06T00:23:15.248318157Z" level=info msg="Stop container \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" with signal terminated" Nov 6 00:23:15.268402 systemd-networkd[1436]: lxc_health: Link DOWN Nov 6 00:23:15.268444 systemd-networkd[1436]: lxc_health: Lost carrier Nov 6 00:23:15.315461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2-rootfs.mount: Deactivated successfully. Nov 6 00:23:15.318854 systemd[1]: cri-containerd-f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1.scope: Deactivated successfully. Nov 6 00:23:15.321121 systemd[1]: cri-containerd-f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1.scope: Consumed 9.627s CPU time, 163.9M memory peak, 40M read from disk, 13.3M written to disk. 
Nov 6 00:23:15.322292 containerd[1558]: time="2025-11-06T00:23:15.321301458Z" level=info msg="received exit event container_id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" pid:3339 exited_at:{seconds:1762388595 nanos:320023071}" Nov 6 00:23:15.324940 containerd[1558]: time="2025-11-06T00:23:15.324501520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" id:\"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" pid:3339 exited_at:{seconds:1762388595 nanos:320023071}" Nov 6 00:23:15.332043 containerd[1558]: time="2025-11-06T00:23:15.331961240Z" level=info msg="StopContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" returns successfully" Nov 6 00:23:15.334048 containerd[1558]: time="2025-11-06T00:23:15.333716857Z" level=info msg="StopPodSandbox for \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\"" Nov 6 00:23:15.334048 containerd[1558]: time="2025-11-06T00:23:15.333800668Z" level=info msg="Container to stop \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.352580 systemd[1]: cri-containerd-fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92.scope: Deactivated successfully. 
Nov 6 00:23:15.356536 containerd[1558]: time="2025-11-06T00:23:15.356475872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" id:\"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" pid:2790 exit_status:137 exited_at:{seconds:1762388595 nanos:353088544}" Nov 6 00:23:15.376935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1-rootfs.mount: Deactivated successfully. Nov 6 00:23:15.391151 containerd[1558]: time="2025-11-06T00:23:15.390993811Z" level=info msg="StopContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" returns successfully" Nov 6 00:23:15.392595 containerd[1558]: time="2025-11-06T00:23:15.392419224Z" level=info msg="StopPodSandbox for \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\"" Nov 6 00:23:15.392595 containerd[1558]: time="2025-11-06T00:23:15.392514723Z" level=info msg="Container to stop \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.392595 containerd[1558]: time="2025-11-06T00:23:15.392526984Z" level=info msg="Container to stop \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.392595 containerd[1558]: time="2025-11-06T00:23:15.392535716Z" level=info msg="Container to stop \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.392911 containerd[1558]: time="2025-11-06T00:23:15.392704578Z" level=info msg="Container to stop \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.392911 containerd[1558]: 
time="2025-11-06T00:23:15.392725857Z" level=info msg="Container to stop \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 00:23:15.402483 systemd[1]: cri-containerd-a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040.scope: Deactivated successfully. Nov 6 00:23:15.422658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92-rootfs.mount: Deactivated successfully. Nov 6 00:23:15.426053 containerd[1558]: time="2025-11-06T00:23:15.425930805Z" level=info msg="shim disconnected" id=fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92 namespace=k8s.io Nov 6 00:23:15.426053 containerd[1558]: time="2025-11-06T00:23:15.425986731Z" level=warning msg="cleaning up after shim disconnected" id=fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92 namespace=k8s.io Nov 6 00:23:15.454436 containerd[1558]: time="2025-11-06T00:23:15.425996029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:23:15.474755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040-rootfs.mount: Deactivated successfully. 
Nov 6 00:23:15.480701 containerd[1558]: time="2025-11-06T00:23:15.480283022Z" level=info msg="shim disconnected" id=a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040 namespace=k8s.io Nov 6 00:23:15.480701 containerd[1558]: time="2025-11-06T00:23:15.480322967Z" level=warning msg="cleaning up after shim disconnected" id=a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040 namespace=k8s.io Nov 6 00:23:15.480701 containerd[1558]: time="2025-11-06T00:23:15.480341465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 00:23:15.501305 containerd[1558]: time="2025-11-06T00:23:15.501246161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" id:\"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" pid:2880 exit_status:137 exited_at:{seconds:1762388595 nanos:409949737}" Nov 6 00:23:15.501485 containerd[1558]: time="2025-11-06T00:23:15.501437105Z" level=info msg="received exit event sandbox_id:\"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" exit_status:137 exited_at:{seconds:1762388595 nanos:409949737}" Nov 6 00:23:15.503195 containerd[1558]: time="2025-11-06T00:23:15.502740042Z" level=info msg="received exit event sandbox_id:\"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" exit_status:137 exited_at:{seconds:1762388595 nanos:353088544}" Nov 6 00:23:15.504731 containerd[1558]: time="2025-11-06T00:23:15.503550488Z" level=info msg="TearDown network for sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" successfully" Nov 6 00:23:15.504964 containerd[1558]: time="2025-11-06T00:23:15.504708409Z" level=info msg="StopPodSandbox for \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" returns successfully" Nov 6 00:23:15.507084 containerd[1558]: time="2025-11-06T00:23:15.506988366Z" level=info msg="TearDown network for sandbox 
\"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" successfully" Nov 6 00:23:15.507084 containerd[1558]: time="2025-11-06T00:23:15.507034079Z" level=info msg="StopPodSandbox for \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" returns successfully" Nov 6 00:23:15.507310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92-shm.mount: Deactivated successfully. Nov 6 00:23:15.566386 kubelet[2686]: I1106 00:23:15.566017 2686 scope.go:117] "RemoveContainer" containerID="f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1" Nov 6 00:23:15.574322 containerd[1558]: time="2025-11-06T00:23:15.573651989Z" level=info msg="RemoveContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\"" Nov 6 00:23:15.585451 containerd[1558]: time="2025-11-06T00:23:15.585390360Z" level=info msg="RemoveContainer for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" returns successfully" Nov 6 00:23:15.594064 kubelet[2686]: I1106 00:23:15.593891 2686 scope.go:117] "RemoveContainer" containerID="e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c" Nov 6 00:23:15.604843 containerd[1558]: time="2025-11-06T00:23:15.602168155Z" level=info msg="RemoveContainer for \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\"" Nov 6 00:23:15.609800 containerd[1558]: time="2025-11-06T00:23:15.609718976Z" level=info msg="RemoveContainer for \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" returns successfully" Nov 6 00:23:15.610242 kubelet[2686]: I1106 00:23:15.610223 2686 scope.go:117] "RemoveContainer" containerID="4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a" Nov 6 00:23:15.614591 containerd[1558]: time="2025-11-06T00:23:15.614013418Z" level=info msg="RemoveContainer for \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\"" Nov 6 00:23:15.619392 
containerd[1558]: time="2025-11-06T00:23:15.619337800Z" level=info msg="RemoveContainer for \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" returns successfully" Nov 6 00:23:15.620070 kubelet[2686]: I1106 00:23:15.620043 2686 scope.go:117] "RemoveContainer" containerID="c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af" Nov 6 00:23:15.622798 containerd[1558]: time="2025-11-06T00:23:15.622757816Z" level=info msg="RemoveContainer for \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\"" Nov 6 00:23:15.631703 containerd[1558]: time="2025-11-06T00:23:15.631635972Z" level=info msg="RemoveContainer for \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" returns successfully" Nov 6 00:23:15.631998 kubelet[2686]: I1106 00:23:15.631963 2686 scope.go:117] "RemoveContainer" containerID="9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a" Nov 6 00:23:15.634734 containerd[1558]: time="2025-11-06T00:23:15.634692713Z" level=info msg="RemoveContainer for \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\"" Nov 6 00:23:15.639549 containerd[1558]: time="2025-11-06T00:23:15.639481991Z" level=info msg="RemoveContainer for \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" returns successfully" Nov 6 00:23:15.640544 kubelet[2686]: I1106 00:23:15.640501 2686 scope.go:117] "RemoveContainer" containerID="f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1" Nov 6 00:23:15.643471 containerd[1558]: time="2025-11-06T00:23:15.641182330Z" level=error msg="ContainerStatus for \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\": not found" Nov 6 00:23:15.643867 kubelet[2686]: E1106 00:23:15.643820 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\": not found" containerID="f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1" Nov 6 00:23:15.644110 kubelet[2686]: I1106 00:23:15.643996 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1"} err="failed to get container status \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4af36423247b619e555a90d2b95a45f7e1c7320b93bddcd7b76f40ccd983cd1\": not found" Nov 6 00:23:15.644239 kubelet[2686]: I1106 00:23:15.644168 2686 scope.go:117] "RemoveContainer" containerID="e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c" Nov 6 00:23:15.644663 containerd[1558]: time="2025-11-06T00:23:15.644627469Z" level=error msg="ContainerStatus for \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\": not found" Nov 6 00:23:15.644892 kubelet[2686]: E1106 00:23:15.644873 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\": not found" containerID="e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c" Nov 6 00:23:15.645006 kubelet[2686]: I1106 00:23:15.644973 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c"} err="failed to get container status \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"e16b7e44e4d0a29100ce3771bf75dea3f5a88a594b2893b91bf23ff72e72ec3c\": not found" Nov 6 00:23:15.645148 kubelet[2686]: I1106 00:23:15.645043 2686 scope.go:117] "RemoveContainer" containerID="4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a" Nov 6 00:23:15.645457 containerd[1558]: time="2025-11-06T00:23:15.645421029Z" level=error msg="ContainerStatus for \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\": not found" Nov 6 00:23:15.645651 kubelet[2686]: E1106 00:23:15.645633 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\": not found" containerID="4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a" Nov 6 00:23:15.645796 kubelet[2686]: I1106 00:23:15.645739 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a"} err="failed to get container status \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a822307ff83c4bd59d18f041668d37943eae18fe179ec8e064cd471c6ea576a\": not found" Nov 6 00:23:15.645796 kubelet[2686]: I1106 00:23:15.645771 2686 scope.go:117] "RemoveContainer" containerID="c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af" Nov 6 00:23:15.646103 containerd[1558]: time="2025-11-06T00:23:15.646076982Z" level=error msg="ContainerStatus for \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\": not found" Nov 6 00:23:15.646382 kubelet[2686]: E1106 00:23:15.646358 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\": not found" containerID="c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af" Nov 6 00:23:15.646478 kubelet[2686]: I1106 00:23:15.646462 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af"} err="failed to get container status \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6da56dc5869343691c96b67ec0c98b46cdf7b137143268bb3b7621e95e0b5af\": not found" Nov 6 00:23:15.646656 kubelet[2686]: I1106 00:23:15.646644 2686 scope.go:117] "RemoveContainer" containerID="9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a" Nov 6 00:23:15.647024 containerd[1558]: time="2025-11-06T00:23:15.646968862Z" level=error msg="ContainerStatus for \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\": not found" Nov 6 00:23:15.647219 kubelet[2686]: E1106 00:23:15.647187 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\": not found" containerID="9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a" Nov 6 00:23:15.647296 kubelet[2686]: I1106 00:23:15.647229 2686 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a"} err="failed to get container status \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b985f747dcad351b8f5ec1a466176e534e6f6453d6c62d0670aa1889ae9712a\": not found" Nov 6 00:23:15.647296 kubelet[2686]: I1106 00:23:15.647258 2686 scope.go:117] "RemoveContainer" containerID="af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2" Nov 6 00:23:15.649387 containerd[1558]: time="2025-11-06T00:23:15.649333329Z" level=info msg="RemoveContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\"" Nov 6 00:23:15.653601 containerd[1558]: time="2025-11-06T00:23:15.653507540Z" level=info msg="RemoveContainer for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" returns successfully" Nov 6 00:23:15.654035 kubelet[2686]: I1106 00:23:15.654007 2686 scope.go:117] "RemoveContainer" containerID="af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2" Nov 6 00:23:15.654577 containerd[1558]: time="2025-11-06T00:23:15.654511435Z" level=error msg="ContainerStatus for \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\": not found" Nov 6 00:23:15.654852 kubelet[2686]: E1106 00:23:15.654826 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\": not found" containerID="af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2" Nov 6 00:23:15.654965 kubelet[2686]: I1106 00:23:15.654942 2686 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2"} err="failed to get container status \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"af3a07b06e6a345478af659d4ac7fd2f7fd53542590ab62e5a688e791b196bb2\": not found" Nov 6 00:23:15.661579 kubelet[2686]: I1106 00:23:15.661477 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665101 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-net\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665188 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-cgroup\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665219 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-etc-cni-netd\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665236 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-run\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665254 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-hostproc\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.665752 kubelet[2686]: I1106 00:23:15.665268 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-lib-modules\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665289 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-xtables-lock\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665262 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665315 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e0a8c3-2891-4c61-b5e2-42842480f843-cilium-config-path\") pod \"a9e0a8c3-2891-4c61-b5e2-42842480f843\" (UID: \"a9e0a8c3-2891-4c61-b5e2-42842480f843\") " Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665333 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665339 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/326ba752-dd09-42d1-82e8-2bf0fef820b9-clustermesh-secrets\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666053 kubelet[2686]: I1106 00:23:15.665356 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-hubble-tls\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666245 kubelet[2686]: I1106 00:23:15.665349 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.666245 kubelet[2686]: I1106 00:23:15.665374 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-bpf-maps\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666245 kubelet[2686]: I1106 00:23:15.665382 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.666245 kubelet[2686]: I1106 00:23:15.665393 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5vn7\" (UniqueName: \"kubernetes.io/projected/a9e0a8c3-2891-4c61-b5e2-42842480f843-kube-api-access-r5vn7\") pod \"a9e0a8c3-2891-4c61-b5e2-42842480f843\" (UID: \"a9e0a8c3-2891-4c61-b5e2-42842480f843\") " Nov 6 00:23:15.666245 kubelet[2686]: I1106 00:23:15.665413 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-config-path\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666389 kubelet[2686]: I1106 00:23:15.665434 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-kernel\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666389 kubelet[2686]: I1106 00:23:15.665467 2686 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cni-path\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666389 kubelet[2686]: I1106 00:23:15.665491 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxdnx\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx\") pod \"326ba752-dd09-42d1-82e8-2bf0fef820b9\" (UID: \"326ba752-dd09-42d1-82e8-2bf0fef820b9\") " Nov 6 00:23:15.666389 kubelet[2686]: I1106 00:23:15.665537 2686 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-net\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.666998 kubelet[2686]: I1106 00:23:15.665549 2686 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-hostproc\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.666998 kubelet[2686]: I1106 00:23:15.666691 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-cgroup\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.666998 kubelet[2686]: I1106 00:23:15.666710 2686 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-etc-cni-netd\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.666998 kubelet[2686]: I1106 00:23:15.666725 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-run\") on node \"ci-4459.1.0-n-800cd2f73d\" 
DevicePath \"\"" Nov 6 00:23:15.668804 kubelet[2686]: I1106 00:23:15.668723 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.670621 kubelet[2686]: I1106 00:23:15.668937 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.671127 kubelet[2686]: I1106 00:23:15.671092 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9e0a8c3-2891-4c61-b5e2-42842480f843-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a9e0a8c3-2891-4c61-b5e2-42842480f843" (UID: "a9e0a8c3-2891-4c61-b5e2-42842480f843"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:23:15.674997 kubelet[2686]: I1106 00:23:15.674945 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.675151 kubelet[2686]: I1106 00:23:15.675010 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.678784 kubelet[2686]: I1106 00:23:15.678723 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 00:23:15.682356 kubelet[2686]: I1106 00:23:15.682238 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e0a8c3-2891-4c61-b5e2-42842480f843-kube-api-access-r5vn7" (OuterVolumeSpecName: "kube-api-access-r5vn7") pod "a9e0a8c3-2891-4c61-b5e2-42842480f843" (UID: "a9e0a8c3-2891-4c61-b5e2-42842480f843"). InnerVolumeSpecName "kube-api-access-r5vn7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:23:15.685527 kubelet[2686]: I1106 00:23:15.684305 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:23:15.688759 kubelet[2686]: I1106 00:23:15.688624 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx" (OuterVolumeSpecName: "kube-api-access-sxdnx") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "kube-api-access-sxdnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:23:15.689591 kubelet[2686]: I1106 00:23:15.689523 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/326ba752-dd09-42d1-82e8-2bf0fef820b9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:23:15.691726 kubelet[2686]: I1106 00:23:15.691662 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "326ba752-dd09-42d1-82e8-2bf0fef820b9" (UID: "326ba752-dd09-42d1-82e8-2bf0fef820b9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767139 2686 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/326ba752-dd09-42d1-82e8-2bf0fef820b9-clustermesh-secrets\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767195 2686 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-hubble-tls\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767207 2686 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-xtables-lock\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767223 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e0a8c3-2891-4c61-b5e2-42842480f843-cilium-config-path\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767239 2686 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-bpf-maps\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767250 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5vn7\" (UniqueName: \"kubernetes.io/projected/a9e0a8c3-2891-4c61-b5e2-42842480f843-kube-api-access-r5vn7\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767262 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/326ba752-dd09-42d1-82e8-2bf0fef820b9-cilium-config-path\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767313 kubelet[2686]: I1106 00:23:15.767272 2686 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-host-proc-sys-kernel\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767732 kubelet[2686]: I1106 00:23:15.767281 2686 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-cni-path\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767732 kubelet[2686]: I1106 00:23:15.767290 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sxdnx\" (UniqueName: \"kubernetes.io/projected/326ba752-dd09-42d1-82e8-2bf0fef820b9-kube-api-access-sxdnx\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.767915 kubelet[2686]: I1106 00:23:15.767830 2686 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/326ba752-dd09-42d1-82e8-2bf0fef820b9-lib-modules\") on node \"ci-4459.1.0-n-800cd2f73d\" DevicePath \"\"" Nov 6 00:23:15.888110 systemd[1]: Removed slice kubepods-besteffort-poda9e0a8c3_2891_4c61_b5e2_42842480f843.slice - libcontainer container kubepods-besteffort-poda9e0a8c3_2891_4c61_b5e2_42842480f843.slice. Nov 6 00:23:15.888273 systemd[1]: kubepods-besteffort-poda9e0a8c3_2891_4c61_b5e2_42842480f843.slice: Consumed 552ms CPU time, 27M memory peak, 3M read from disk, 4K written to disk. Nov 6 00:23:15.895964 systemd[1]: Removed slice kubepods-burstable-pod326ba752_dd09_42d1_82e8_2bf0fef820b9.slice - libcontainer container kubepods-burstable-pod326ba752_dd09_42d1_82e8_2bf0fef820b9.slice. 
Nov 6 00:23:15.896133 systemd[1]: kubepods-burstable-pod326ba752_dd09_42d1_82e8_2bf0fef820b9.slice: Consumed 9.754s CPU time, 164.2M memory peak, 40M read from disk, 13.3M written to disk. Nov 6 00:23:16.028773 kubelet[2686]: I1106 00:23:16.028277 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326ba752-dd09-42d1-82e8-2bf0fef820b9" path="/var/lib/kubelet/pods/326ba752-dd09-42d1-82e8-2bf0fef820b9/volumes" Nov 6 00:23:16.029333 kubelet[2686]: I1106 00:23:16.029261 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e0a8c3-2891-4c61-b5e2-42842480f843" path="/var/lib/kubelet/pods/a9e0a8c3-2891-4c61-b5e2-42842480f843/volumes" Nov 6 00:23:16.313599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040-shm.mount: Deactivated successfully. Nov 6 00:23:16.313752 systemd[1]: var-lib-kubelet-pods-326ba752\x2ddd09\x2d42d1\x2d82e8\x2d2bf0fef820b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsxdnx.mount: Deactivated successfully. Nov 6 00:23:16.313834 systemd[1]: var-lib-kubelet-pods-a9e0a8c3\x2d2891\x2d4c61\x2db5e2\x2d42842480f843-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5vn7.mount: Deactivated successfully. Nov 6 00:23:16.313937 systemd[1]: var-lib-kubelet-pods-326ba752\x2ddd09\x2d42d1\x2d82e8\x2d2bf0fef820b9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 6 00:23:16.314007 systemd[1]: var-lib-kubelet-pods-326ba752\x2ddd09\x2d42d1\x2d82e8\x2d2bf0fef820b9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 6 00:23:17.089638 sshd[4282]: Connection closed by 139.178.68.195 port 36728 Nov 6 00:23:17.090854 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:17.107028 systemd[1]: sshd@24-147.182.203.34:22-139.178.68.195:36728.service: Deactivated successfully. 
Nov 6 00:23:17.110245 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:23:17.112186 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:23:17.117964 systemd[1]: Started sshd@25-147.182.203.34:22-139.178.68.195:36734.service - OpenSSH per-connection server daemon (139.178.68.195:36734). Nov 6 00:23:17.119723 systemd-logind[1528]: Removed session 25. Nov 6 00:23:17.203015 sshd[4433]: Accepted publickey for core from 139.178.68.195 port 36734 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:17.205029 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:17.212289 systemd-logind[1528]: New session 26 of user core. Nov 6 00:23:17.216059 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:23:18.386711 sshd[4436]: Connection closed by 139.178.68.195 port 36734 Nov 6 00:23:18.388868 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:18.408735 systemd[1]: sshd@25-147.182.203.34:22-139.178.68.195:36734.service: Deactivated successfully. Nov 6 00:23:18.414241 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:23:18.422885 systemd-logind[1528]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:23:18.432084 systemd[1]: Started sshd@26-147.182.203.34:22-139.178.68.195:36750.service - OpenSSH per-connection server daemon (139.178.68.195:36750). Nov 6 00:23:18.436201 systemd-logind[1528]: Removed session 26. 
Nov 6 00:23:18.528705 kubelet[2686]: I1106 00:23:18.527896 2686 memory_manager.go:355] "RemoveStaleState removing state" podUID="a9e0a8c3-2891-4c61-b5e2-42842480f843" containerName="cilium-operator" Nov 6 00:23:18.529876 kubelet[2686]: I1106 00:23:18.529353 2686 memory_manager.go:355] "RemoveStaleState removing state" podUID="326ba752-dd09-42d1-82e8-2bf0fef820b9" containerName="cilium-agent" Nov 6 00:23:18.553825 systemd[1]: Created slice kubepods-burstable-pod4f77f827_4c7e_4e45_8c96_ccaa32a76410.slice - libcontainer container kubepods-burstable-pod4f77f827_4c7e_4e45_8c96_ccaa32a76410.slice. Nov 6 00:23:18.568509 sshd[4446]: Accepted publickey for core from 139.178.68.195 port 36750 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:18.572922 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:18.580902 systemd-logind[1528]: New session 27 of user core. Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587430 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f77f827-4c7e-4e45-8c96-ccaa32a76410-cilium-ipsec-secrets\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587495 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-host-proc-sys-net\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587528 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f77f827-4c7e-4e45-8c96-ccaa32a76410-hubble-tls\") pod \"cilium-s6k5m\" (UID: 
\"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587588 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-cni-path\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587620 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-hostproc\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588447 kubelet[2686]: I1106 00:23:18.587650 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-host-proc-sys-kernel\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587679 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-etc-cni-netd\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587710 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f77f827-4c7e-4e45-8c96-ccaa32a76410-clustermesh-secrets\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587847 2686 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-xtables-lock\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587881 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-cilium-run\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587906 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-cilium-cgroup\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.588863 kubelet[2686]: I1106 00:23:18.587939 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-lib-modules\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.592221 kubelet[2686]: I1106 00:23:18.587969 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f77f827-4c7e-4e45-8c96-ccaa32a76410-bpf-maps\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.592221 kubelet[2686]: I1106 00:23:18.588051 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4f77f827-4c7e-4e45-8c96-ccaa32a76410-cilium-config-path\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.592221 kubelet[2686]: I1106 00:23:18.588084 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddf6r\" (UniqueName: \"kubernetes.io/projected/4f77f827-4c7e-4e45-8c96-ccaa32a76410-kube-api-access-ddf6r\") pod \"cilium-s6k5m\" (UID: \"4f77f827-4c7e-4e45-8c96-ccaa32a76410\") " pod="kube-system/cilium-s6k5m" Nov 6 00:23:18.589256 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 00:23:18.655514 sshd[4449]: Connection closed by 139.178.68.195 port 36750 Nov 6 00:23:18.657092 sshd-session[4446]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:18.671379 systemd[1]: sshd@26-147.182.203.34:22-139.178.68.195:36750.service: Deactivated successfully. Nov 6 00:23:18.677073 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:23:18.679953 systemd-logind[1528]: Session 27 logged out. Waiting for processes to exit. Nov 6 00:23:18.683667 systemd-logind[1528]: Removed session 27. Nov 6 00:23:18.686587 systemd[1]: Started sshd@27-147.182.203.34:22-139.178.68.195:36756.service - OpenSSH per-connection server daemon (139.178.68.195:36756). Nov 6 00:23:18.808432 sshd[4456]: Accepted publickey for core from 139.178.68.195 port 36756 ssh2: RSA SHA256:aGxtOxRPrSuR65m5qK/D9Z1P98HLz2sHQoDCktl9SWw Nov 6 00:23:18.810255 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:18.816762 systemd-logind[1528]: New session 28 of user core. Nov 6 00:23:18.827929 systemd[1]: Started session-28.scope - Session 28 of User core. 
Nov 6 00:23:18.861078 kubelet[2686]: E1106 00:23:18.861026 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:18.861793 containerd[1558]: time="2025-11-06T00:23:18.861739389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s6k5m,Uid:4f77f827-4c7e-4e45-8c96-ccaa32a76410,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:18.901953 containerd[1558]: time="2025-11-06T00:23:18.900846518Z" level=info msg="connecting to shim 1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:18.951955 systemd[1]: Started cri-containerd-1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436.scope - libcontainer container 1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436. 
Nov 6 00:23:19.019606 containerd[1558]: time="2025-11-06T00:23:19.018082900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s6k5m,Uid:4f77f827-4c7e-4e45-8c96-ccaa32a76410,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\"" Nov 6 00:23:19.021662 kubelet[2686]: E1106 00:23:19.021605 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:19.027222 containerd[1558]: time="2025-11-06T00:23:19.027164452Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 00:23:19.040327 containerd[1558]: time="2025-11-06T00:23:19.039083079Z" level=info msg="Container e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:19.057000 containerd[1558]: time="2025-11-06T00:23:19.056915121Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\"" Nov 6 00:23:19.057716 containerd[1558]: time="2025-11-06T00:23:19.057679764Z" level=info msg="StartContainer for \"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\"" Nov 6 00:23:19.062198 containerd[1558]: time="2025-11-06T00:23:19.062146724Z" level=info msg="connecting to shim e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" protocol=ttrpc version=3 Nov 6 00:23:19.108966 systemd[1]: Started cri-containerd-e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4.scope - 
libcontainer container e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4. Nov 6 00:23:19.160900 containerd[1558]: time="2025-11-06T00:23:19.160851148Z" level=info msg="StartContainer for \"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\" returns successfully" Nov 6 00:23:19.174818 systemd[1]: cri-containerd-e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4.scope: Deactivated successfully. Nov 6 00:23:19.180198 containerd[1558]: time="2025-11-06T00:23:19.180148698Z" level=info msg="received exit event container_id:\"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\" id:\"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\" pid:4529 exited_at:{seconds:1762388599 nanos:179633115}" Nov 6 00:23:19.180766 containerd[1558]: time="2025-11-06T00:23:19.180613843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\" id:\"e508ead51bdcad7b8a71119b6abf104dc2b73a32dfb588d1b0440ed596af82f4\" pid:4529 exited_at:{seconds:1762388599 nanos:179633115}" Nov 6 00:23:19.216473 kubelet[2686]: E1106 00:23:19.216320 2686 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 6 00:23:19.596097 kubelet[2686]: E1106 00:23:19.595520 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:19.600835 containerd[1558]: time="2025-11-06T00:23:19.600779768Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 00:23:19.615063 containerd[1558]: time="2025-11-06T00:23:19.613624722Z" level=info msg="Container 
dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:19.623378 containerd[1558]: time="2025-11-06T00:23:19.623309997Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\"" Nov 6 00:23:19.625051 containerd[1558]: time="2025-11-06T00:23:19.624969080Z" level=info msg="StartContainer for \"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\"" Nov 6 00:23:19.626430 containerd[1558]: time="2025-11-06T00:23:19.626349127Z" level=info msg="connecting to shim dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" protocol=ttrpc version=3 Nov 6 00:23:19.653135 systemd[1]: Started cri-containerd-dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8.scope - libcontainer container dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8. Nov 6 00:23:19.713285 containerd[1558]: time="2025-11-06T00:23:19.712353678Z" level=info msg="StartContainer for \"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\" returns successfully" Nov 6 00:23:19.719253 systemd[1]: cri-containerd-dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8.scope: Deactivated successfully. 
Nov 6 00:23:19.722875 containerd[1558]: time="2025-11-06T00:23:19.722816444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\" id:\"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\" pid:4573 exited_at:{seconds:1762388599 nanos:721736167}" Nov 6 00:23:19.723179 containerd[1558]: time="2025-11-06T00:23:19.723147227Z" level=info msg="received exit event container_id:\"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\" id:\"dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8\" pid:4573 exited_at:{seconds:1762388599 nanos:721736167}" Nov 6 00:23:19.751430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8a6d8953c071cf65ae5aade5946ef9fb3c26ac29508e9e82f59d7ce46622b8-rootfs.mount: Deactivated successfully. Nov 6 00:23:20.600973 kubelet[2686]: E1106 00:23:20.600872 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:20.608216 containerd[1558]: time="2025-11-06T00:23:20.607772070Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 00:23:20.626641 containerd[1558]: time="2025-11-06T00:23:20.623785130Z" level=info msg="Container 9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:20.629944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449550533.mount: Deactivated successfully. 
Nov 6 00:23:20.646435 containerd[1558]: time="2025-11-06T00:23:20.646237288Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\"" Nov 6 00:23:20.647203 containerd[1558]: time="2025-11-06T00:23:20.647161421Z" level=info msg="StartContainer for \"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\"" Nov 6 00:23:20.651326 containerd[1558]: time="2025-11-06T00:23:20.650738790Z" level=info msg="connecting to shim 9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" protocol=ttrpc version=3 Nov 6 00:23:20.704873 systemd[1]: Started cri-containerd-9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399.scope - libcontainer container 9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399. Nov 6 00:23:20.795469 containerd[1558]: time="2025-11-06T00:23:20.795308448Z" level=info msg="StartContainer for \"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\" returns successfully" Nov 6 00:23:20.798951 systemd[1]: cri-containerd-9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399.scope: Deactivated successfully. 
Nov 6 00:23:20.801911 containerd[1558]: time="2025-11-06T00:23:20.801696276Z" level=info msg="received exit event container_id:\"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\" id:\"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\" pid:4617 exited_at:{seconds:1762388600 nanos:801317051}" Nov 6 00:23:20.803113 containerd[1558]: time="2025-11-06T00:23:20.803032993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\" id:\"9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399\" pid:4617 exited_at:{seconds:1762388600 nanos:801317051}" Nov 6 00:23:20.846215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9496d14264305f35e7dd4941a03a2ba3b62aba206b63ec8cdd3daf3ad53ae399-rootfs.mount: Deactivated successfully. Nov 6 00:23:21.623031 kubelet[2686]: E1106 00:23:21.622987 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:21.630275 containerd[1558]: time="2025-11-06T00:23:21.630186195Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 00:23:21.660626 containerd[1558]: time="2025-11-06T00:23:21.660575051Z" level=info msg="Container 7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:21.672845 containerd[1558]: time="2025-11-06T00:23:21.672712263Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\"" Nov 6 00:23:21.674584 containerd[1558]: 
time="2025-11-06T00:23:21.674376518Z" level=info msg="StartContainer for \"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\"" Nov 6 00:23:21.676244 containerd[1558]: time="2025-11-06T00:23:21.676178634Z" level=info msg="connecting to shim 7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" protocol=ttrpc version=3 Nov 6 00:23:21.713916 systemd[1]: Started cri-containerd-7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5.scope - libcontainer container 7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5. Nov 6 00:23:21.755358 systemd[1]: cri-containerd-7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5.scope: Deactivated successfully. Nov 6 00:23:21.758131 containerd[1558]: time="2025-11-06T00:23:21.758076195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\" id:\"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\" pid:4658 exited_at:{seconds:1762388601 nanos:756414493}" Nov 6 00:23:21.760299 containerd[1558]: time="2025-11-06T00:23:21.760228749Z" level=info msg="received exit event container_id:\"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\" id:\"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\" pid:4658 exited_at:{seconds:1762388601 nanos:756414493}" Nov 6 00:23:21.773045 containerd[1558]: time="2025-11-06T00:23:21.772978924Z" level=info msg="StartContainer for \"7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5\" returns successfully" Nov 6 00:23:21.792467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b108fc500a5718257380636bddef55cdb250a72d2eb1cb2c9fee8b5015278c5-rootfs.mount: Deactivated successfully. 
Nov 6 00:23:22.669002 kubelet[2686]: E1106 00:23:22.668915 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:22.673609 containerd[1558]: time="2025-11-06T00:23:22.673362398Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 00:23:22.700224 containerd[1558]: time="2025-11-06T00:23:22.700167505Z" level=info msg="Container e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:22.703060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431921204.mount: Deactivated successfully. Nov 6 00:23:22.715949 containerd[1558]: time="2025-11-06T00:23:22.715751517Z" level=info msg="CreateContainer within sandbox \"1b0584e2c33ccee01ade81038f89a83aeee16262939ed1b064203023b84d0436\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\"" Nov 6 00:23:22.717003 containerd[1558]: time="2025-11-06T00:23:22.716857485Z" level=info msg="StartContainer for \"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\"" Nov 6 00:23:22.719419 containerd[1558]: time="2025-11-06T00:23:22.719382227Z" level=info msg="connecting to shim e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c" address="unix:///run/containerd/s/c2e9b6dad8a8fd490c746684a9ccd8a10c5d877153d1c9735a9c7d9915ae2149" protocol=ttrpc version=3 Nov 6 00:23:22.750888 systemd[1]: Started cri-containerd-e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c.scope - libcontainer container e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c. 
Nov 6 00:23:22.808234 containerd[1558]: time="2025-11-06T00:23:22.808172110Z" level=info msg="StartContainer for \"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" returns successfully" Nov 6 00:23:22.913250 containerd[1558]: time="2025-11-06T00:23:22.913203153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" id:\"0007eed49a9ee7292141e85b84ae642afeb9051abd7e9c2df40b5701e6f07682\" pid:4726 exited_at:{seconds:1762388602 nanos:912713369}" Nov 6 00:23:23.356625 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Nov 6 00:23:23.681715 kubelet[2686]: E1106 00:23:23.680985 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:24.864185 kubelet[2686]: E1106 00:23:24.863029 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:25.513599 containerd[1558]: time="2025-11-06T00:23:25.511592851Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" id:\"9db3ae368283d35c3ca0c6c337300a6eed442ddfd84c3ff2e6c13f9e69f35368\" pid:4926 exit_status:1 exited_at:{seconds:1762388605 nanos:511008816}" Nov 6 00:23:27.003370 systemd-networkd[1436]: lxc_health: Link UP Nov 6 00:23:27.010369 systemd-networkd[1436]: lxc_health: Gained carrier Nov 6 00:23:27.743595 containerd[1558]: time="2025-11-06T00:23:27.743309275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" id:\"25aed7997e31afc9a98b854eb6e0c8f6554ee714d45e651307dd121adea7d526\" pid:5282 exited_at:{seconds:1762388607 nanos:742916454}" Nov 6 
00:23:27.750602 kubelet[2686]: E1106 00:23:27.750404 2686 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:54228->127.0.0.1:38353: read tcp 127.0.0.1:54228->127.0.0.1:38353: read: connection reset by peer Nov 6 00:23:27.751686 kubelet[2686]: E1106 00:23:27.751313 2686 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54228->127.0.0.1:38353: write tcp 127.0.0.1:54228->127.0.0.1:38353: write: broken pipe Nov 6 00:23:28.864129 kubelet[2686]: E1106 00:23:28.864065 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:28.892582 kubelet[2686]: I1106 00:23:28.892470 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s6k5m" podStartSLOduration=10.892451641 podStartE2EDuration="10.892451641s" podCreationTimestamp="2025-11-06 00:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:23.704321246 +0000 UTC m=+109.935286379" watchObservedRunningTime="2025-11-06 00:23:28.892451641 +0000 UTC m=+115.123416766" Nov 6 00:23:28.980756 systemd-networkd[1436]: lxc_health: Gained IPv6LL Nov 6 00:23:29.700478 kubelet[2686]: E1106 00:23:29.700418 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:30.345136 containerd[1558]: time="2025-11-06T00:23:30.343375134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" id:\"0a1abe6f16fbe778d5aa5138b46b7a792d283aaf0fb11b18a14e69617a53d25e\" pid:5311 exited_at:{seconds:1762388610 nanos:342044583}" Nov 6 00:23:30.703215 kubelet[2686]: E1106 
00:23:30.703143 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 6 00:23:32.541628 containerd[1558]: time="2025-11-06T00:23:32.541331963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" id:\"81eb62d3156a2c261a4cb3fa10d095802f7c59d1f8aff613d5ae690fdaf2f390\" pid:5336 exited_at:{seconds:1762388612 nanos:540861022}" Nov 6 00:23:33.993660 containerd[1558]: time="2025-11-06T00:23:33.993545066Z" level=info msg="StopPodSandbox for \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\"" Nov 6 00:23:33.994848 containerd[1558]: time="2025-11-06T00:23:33.994422443Z" level=info msg="TearDown network for sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" successfully" Nov 6 00:23:33.994848 containerd[1558]: time="2025-11-06T00:23:33.994468817Z" level=info msg="StopPodSandbox for \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" returns successfully" Nov 6 00:23:33.996605 containerd[1558]: time="2025-11-06T00:23:33.995502194Z" level=info msg="RemovePodSandbox for \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\"" Nov 6 00:23:33.996605 containerd[1558]: time="2025-11-06T00:23:33.995548893Z" level=info msg="Forcibly stopping sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\"" Nov 6 00:23:33.996605 containerd[1558]: time="2025-11-06T00:23:33.995684078Z" level=info msg="TearDown network for sandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" successfully" Nov 6 00:23:33.997928 containerd[1558]: time="2025-11-06T00:23:33.997861923Z" level=info msg="Ensure that sandbox a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040 in task-service has been cleanup successfully" Nov 6 00:23:34.002938 containerd[1558]: 
time="2025-11-06T00:23:34.002845903Z" level=info msg="RemovePodSandbox \"a5f3552362a8ce3c4443827cdf5915b506b77615b9556f7dc9f04db604808040\" returns successfully" Nov 6 00:23:34.004257 containerd[1558]: time="2025-11-06T00:23:34.004211294Z" level=info msg="StopPodSandbox for \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\"" Nov 6 00:23:34.004829 containerd[1558]: time="2025-11-06T00:23:34.004797455Z" level=info msg="TearDown network for sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" successfully" Nov 6 00:23:34.004971 containerd[1558]: time="2025-11-06T00:23:34.004952432Z" level=info msg="StopPodSandbox for \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" returns successfully" Nov 6 00:23:34.007273 containerd[1558]: time="2025-11-06T00:23:34.005687160Z" level=info msg="RemovePodSandbox for \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\"" Nov 6 00:23:34.007273 containerd[1558]: time="2025-11-06T00:23:34.005726022Z" level=info msg="Forcibly stopping sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\"" Nov 6 00:23:34.007273 containerd[1558]: time="2025-11-06T00:23:34.005847606Z" level=info msg="TearDown network for sandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" successfully" Nov 6 00:23:34.008199 containerd[1558]: time="2025-11-06T00:23:34.008149168Z" level=info msg="Ensure that sandbox fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92 in task-service has been cleanup successfully" Nov 6 00:23:34.012484 containerd[1558]: time="2025-11-06T00:23:34.012404293Z" level=info msg="RemovePodSandbox \"fc127ad15423d1be5fcaf1eb56935c42e2d4f46aaca8571f81dfd460ba034e92\" returns successfully" Nov 6 00:23:34.711337 containerd[1558]: time="2025-11-06T00:23:34.711279213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e3350d2d11599b6bd1397d1142d26337820c18e5c96f0b318f43409bead0f31c\" 
id:\"10768fb49602757bc9f9857b966aeaec735503b57d93db536dd88a5f1a33a291\" pid:5380 exited_at:{seconds:1762388614 nanos:710914972}" Nov 6 00:23:34.724866 sshd[4463]: Connection closed by 139.178.68.195 port 36756 Nov 6 00:23:34.725807 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:34.732036 systemd-logind[1528]: Session 28 logged out. Waiting for processes to exit. Nov 6 00:23:34.732985 systemd[1]: sshd@27-147.182.203.34:22-139.178.68.195:36756.service: Deactivated successfully. Nov 6 00:23:34.737550 systemd[1]: session-28.scope: Deactivated successfully. Nov 6 00:23:34.741164 systemd-logind[1528]: Removed session 28.