Dec 12 18:38:17.934416 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 12 18:38:17.934445 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:38:17.934458 kernel: BIOS-provided physical RAM map: Dec 12 18:38:17.934466 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 12 18:38:17.934472 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 12 18:38:17.934479 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 12 18:38:17.936529 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Dec 12 18:38:17.936551 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Dec 12 18:38:17.936558 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 12 18:38:17.936565 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 12 18:38:17.936572 kernel: NX (Execute Disable) protection: active Dec 12 18:38:17.936584 kernel: APIC: Static calls initialized Dec 12 18:38:17.936591 kernel: SMBIOS 2.8 present. Dec 12 18:38:17.936598 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Dec 12 18:38:17.936606 kernel: DMI: Memory slots populated: 1/1 Dec 12 18:38:17.936613 kernel: Hypervisor detected: KVM Dec 12 18:38:17.936627 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:38:17.936634 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 12 18:38:17.936642 kernel: kvm-clock: using sched offset of 5521285291 cycles Dec 12 18:38:17.936650 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 12 18:38:17.936658 kernel: tsc: Detected 1995.312 MHz processor Dec 12 18:38:17.936665 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 12 18:38:17.936674 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 12 18:38:17.936681 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Dec 12 18:38:17.936689 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 12 18:38:17.936697 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 12 18:38:17.936707 kernel: ACPI: Early table checksum verification disabled Dec 12 18:38:17.936714 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Dec 12 18:38:17.936726 kernel: ACPI: RSDT 0x000000007FFE19FD 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936733 kernel: ACPI: FACP 0x000000007FFE17E1 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936740 kernel: ACPI: DSDT 0x000000007FFE0040 0017A1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936748 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 12 18:38:17.936755 kernel: ACPI: APIC 0x000000007FFE1855 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936762 kernel: ACPI: HPET 0x000000007FFE18D5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936772 kernel: ACPI: SRAT 0x000000007FFE190D 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936779 kernel: ACPI: 
WAET 0x000000007FFE19D5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 18:38:17.936787 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe17e1-0x7ffe1854] Dec 12 18:38:17.936794 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe17e0] Dec 12 18:38:17.936802 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 12 18:38:17.936809 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe1855-0x7ffe18d4] Dec 12 18:38:17.936823 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe18d5-0x7ffe190c] Dec 12 18:38:17.936839 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe190d-0x7ffe19d4] Dec 12 18:38:17.936850 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe19d5-0x7ffe19fc] Dec 12 18:38:17.936858 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 12 18:38:17.936866 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 12 18:38:17.936874 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff] Dec 12 18:38:17.936882 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff] Dec 12 18:38:17.936890 kernel: Zone ranges: Dec 12 18:38:17.936897 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 12 18:38:17.936907 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Dec 12 18:38:17.936915 kernel: Normal empty Dec 12 18:38:17.936923 kernel: Device empty Dec 12 18:38:17.936930 kernel: Movable zone start for each node Dec 12 18:38:17.936938 kernel: Early memory node ranges Dec 12 18:38:17.936945 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 12 18:38:17.936953 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Dec 12 18:38:17.936960 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Dec 12 18:38:17.936968 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 12 18:38:17.936978 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 12 18:38:17.936986 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Dec 12 18:38:17.936994 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 12 18:38:17.937006 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 12 18:38:17.937014 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 12 18:38:17.937025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 12 18:38:17.937032 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 12 18:38:17.937040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 12 18:38:17.937051 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 12 18:38:17.937061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 12 18:38:17.937069 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 12 18:38:17.937077 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 12 18:38:17.937084 kernel: TSC deadline timer available Dec 12 18:38:17.937092 kernel: CPU topo: Max. logical packages: 1 Dec 12 18:38:17.937099 kernel: CPU topo: Max. logical dies: 1 Dec 12 18:38:17.937107 kernel: CPU topo: Max. dies per package: 1 Dec 12 18:38:17.937115 kernel: CPU topo: Max. threads per core: 1 Dec 12 18:38:17.937123 kernel: CPU topo: Num. cores per package: 2 Dec 12 18:38:17.937130 kernel: CPU topo: Num. 
threads per package: 2 Dec 12 18:38:17.937141 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Dec 12 18:38:17.937149 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 12 18:38:17.937156 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 12 18:38:17.937164 kernel: Booting paravirtualized kernel on KVM Dec 12 18:38:17.937172 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 12 18:38:17.937180 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 12 18:38:17.937188 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Dec 12 18:38:17.937196 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Dec 12 18:38:17.937203 kernel: pcpu-alloc: [0] 0 1 Dec 12 18:38:17.937213 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 12 18:38:17.937223 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:38:17.937233 kernel: random: crng init done Dec 12 18:38:17.937245 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 18:38:17.937260 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 18:38:17.937271 kernel: Fallback order for Node 0: 0 Dec 12 18:38:17.937286 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153 Dec 12 18:38:17.937303 kernel: Policy zone: DMA32 Dec 12 18:38:17.937329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 18:38:17.937350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 12 18:38:17.937366 kernel: Kernel/User page tables isolation: enabled Dec 12 18:38:17.937386 kernel: ftrace: allocating 40103 entries in 157 pages Dec 12 18:38:17.937407 kernel: ftrace: allocated 157 pages with 5 groups Dec 12 18:38:17.937428 kernel: Dynamic Preempt: voluntary Dec 12 18:38:17.937448 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 18:38:17.937471 kernel: rcu: RCU event tracing is enabled. Dec 12 18:38:17.937517 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 12 18:38:17.937542 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 18:38:17.937557 kernel: Rude variant of Tasks RCU enabled. Dec 12 18:38:17.937572 kernel: Tracing variant of Tasks RCU enabled. Dec 12 18:38:17.937589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 18:38:17.937604 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 12 18:38:17.937618 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:38:17.937643 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:38:17.937659 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 12 18:38:17.937674 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 12 18:38:17.937697 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
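The kernel command line echoed above is a flat list of space-separated options, with verity.usrhash and root=LABEL=ROOT carrying the dm-verity and root-filesystem settings. A minimal sketch of splitting such a line into a dictionary, for example when reading it back from /proc/cmdline; the parse_cmdline helper is illustrative and not part of Flatcar:

```python
# Illustrative parser for a kernel command line like the one logged above
# (or the live copy in /proc/cmdline). Bare flags map to True; if a key is
# repeated (e.g. console= appears twice), the last occurrence wins here.
def parse_cmdline(cmdline: str) -> dict:
    opts = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        opts[key] = value if sep else True
    return opts

example = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=tty0 flatcar.oem.id=digitalocean "
           "verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022")
opts = parse_cmdline(example)
print(opts["root"], opts["flatcar.oem.id"], opts["verity.usrhash"][:12])
```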
Dec 12 18:38:17.937713 kernel: Console: colour VGA+ 80x25 Dec 12 18:38:17.937725 kernel: printk: legacy console [tty0] enabled Dec 12 18:38:17.937737 kernel: printk: legacy console [ttyS0] enabled Dec 12 18:38:17.937749 kernel: ACPI: Core revision 20240827 Dec 12 18:38:17.937761 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 12 18:38:17.937783 kernel: APIC: Switch to symmetric I/O mode setup Dec 12 18:38:17.937794 kernel: x2apic enabled Dec 12 18:38:17.937803 kernel: APIC: Switched APIC routing to: physical x2apic Dec 12 18:38:17.937812 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 12 18:38:17.937820 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Dec 12 18:38:17.937834 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312) Dec 12 18:38:17.937846 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 12 18:38:17.937854 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 12 18:38:17.937863 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 12 18:38:17.937871 kernel: Spectre V2 : Mitigation: Retpolines Dec 12 18:38:17.937882 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 12 18:38:17.937891 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 12 18:38:17.937899 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 12 18:38:17.937908 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 12 18:38:17.937916 kernel: MDS: Mitigation: Clear CPU buffers Dec 12 18:38:17.937925 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 12 18:38:17.937933 kernel: active return thunk: its_return_thunk Dec 12 18:38:17.937942 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 12 18:38:17.937950 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 12 18:38:17.937961 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 12 18:38:17.937970 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 12 18:38:17.937978 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 12 18:38:17.937987 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 12 18:38:17.937996 kernel: Freeing SMP alternatives memory: 32K Dec 12 18:38:17.938008 kernel: pid_max: default: 32768 minimum: 301 Dec 12 18:38:17.938022 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 18:38:17.938035 kernel: landlock: Up and running. Dec 12 18:38:17.938043 kernel: SELinux: Initializing. Dec 12 18:38:17.938054 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:38:17.938063 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 12 18:38:17.938071 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Dec 12 18:38:17.938079 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Dec 12 18:38:17.938088 kernel: signal: max sigframe size: 1776 Dec 12 18:38:17.938096 kernel: rcu: Hierarchical SRCU implementation. Dec 12 18:38:17.939536 kernel: rcu: Max phase no-delay instances is 400. 
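The calibration line above ("3990.62 BogoMIPS (lpj=1995312)") follows directly from the 1995.312 MHz TSC reported earlier in the log. A small worked sketch of that arithmetic, assuming a 1000 Hz tick (CONFIG_HZ is not stated in the log):

```python
# Worked arithmetic for the "Calibrating delay loop (skipped)" line above,
# assuming HZ=1000 (an assumption; the log does not state the tick rate).
tsc_khz = 1995312                 # "tsc: Detected 1995.312 MHz processor"
hz = 1000                         # assumed CONFIG_HZ
lpj = tsc_khz * 1000 // hz        # loops per jiffy -> 1995312, matches lpj=...
bogomips = lpj * hz / 500000      # -> 3990.62, matching the log line
print(lpj, round(bogomips, 2))    # two CPUs give the 7981.24 total seen later
```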
Dec 12 18:38:17.939553 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 18:38:17.939562 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 12 18:38:17.939575 kernel: smp: Bringing up secondary CPUs ... Dec 12 18:38:17.939590 kernel: smpboot: x86: Booting SMP configuration: Dec 12 18:38:17.939599 kernel: .... node #0, CPUs: #1 Dec 12 18:38:17.939608 kernel: smp: Brought up 1 node, 2 CPUs Dec 12 18:38:17.939616 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS) Dec 12 18:38:17.939626 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 133332K reserved, 0K cma-reserved) Dec 12 18:38:17.939634 kernel: devtmpfs: initialized Dec 12 18:38:17.939643 kernel: x86/mm: Memory block size: 128MB Dec 12 18:38:17.939652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 18:38:17.939663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 12 18:38:17.939672 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 18:38:17.939680 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 18:38:17.939689 kernel: audit: initializing netlink subsys (disabled) Dec 12 18:38:17.939698 kernel: audit: type=2000 audit(1765564693.088:1): state=initialized audit_enabled=0 res=1 Dec 12 18:38:17.939706 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 18:38:17.939715 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 12 18:38:17.939723 kernel: cpuidle: using governor menu Dec 12 18:38:17.939731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 18:38:17.939742 kernel: dca service started, version 1.12.1 Dec 12 18:38:17.939751 kernel: PCI: Using configuration type 1 for base access Dec 12 18:38:17.939759 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
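The audit record above carries a raw epoch timestamp (audit(1765564693.088:1)), and the rtc_cmos entry later in the log pairs 1765564697 with 2025-12-12T18:38:17 UTC. A quick sketch for converting those epoch values when reading such logs:

```python
# Convert the raw epoch timestamps that appear in the log (the audit(...) tag
# and the rtc_cmos "setting system clock" line) into human-readable UTC times.
from datetime import datetime, timezone

for epoch in (1765564693.088, 1765564697):
    print(epoch, datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
```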
Dec 12 18:38:17.939768 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 18:38:17.939776 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 18:38:17.939785 kernel: ACPI: Added _OSI(Module Device) Dec 12 18:38:17.939793 kernel: ACPI: Added _OSI(Processor Device) Dec 12 18:38:17.939802 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 18:38:17.939810 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 18:38:17.939821 kernel: ACPI: Interpreter enabled Dec 12 18:38:17.939830 kernel: ACPI: PM: (supports S0 S5) Dec 12 18:38:17.939838 kernel: ACPI: Using IOAPIC for interrupt routing Dec 12 18:38:17.939847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 12 18:38:17.939855 kernel: PCI: Using E820 reservations for host bridge windows Dec 12 18:38:17.939864 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 12 18:38:17.939872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 18:38:17.940096 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 12 18:38:17.940229 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 12 18:38:17.940335 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 12 18:38:17.940346 kernel: acpiphp: Slot [3] registered Dec 12 18:38:17.940355 kernel: acpiphp: Slot [4] registered Dec 12 18:38:17.940364 kernel: acpiphp: Slot [5] registered Dec 12 18:38:17.940372 kernel: acpiphp: Slot [6] registered Dec 12 18:38:17.940380 kernel: acpiphp: Slot [7] registered Dec 12 18:38:17.940388 kernel: acpiphp: Slot [8] registered Dec 12 18:38:17.940401 kernel: acpiphp: Slot [9] registered Dec 12 18:38:17.940410 kernel: acpiphp: Slot [10] registered Dec 12 18:38:17.940418 kernel: acpiphp: Slot [11] registered Dec 12 18:38:17.940426 kernel: acpiphp: Slot [12] registered Dec 12 18:38:17.940435 kernel: acpiphp: Slot [13] registered Dec 12 18:38:17.940443 kernel: acpiphp: Slot [14] registered Dec 12 18:38:17.940451 kernel: acpiphp: Slot [15] registered Dec 12 18:38:17.940459 kernel: acpiphp: Slot [16] registered Dec 12 18:38:17.940468 kernel: acpiphp: Slot [17] registered Dec 12 18:38:17.940476 kernel: acpiphp: Slot [18] registered Dec 12 18:38:17.941534 kernel: acpiphp: Slot [19] registered Dec 12 18:38:17.941551 kernel: acpiphp: Slot [20] registered Dec 12 18:38:17.941560 kernel: acpiphp: Slot [21] registered Dec 12 18:38:17.941569 kernel: acpiphp: Slot [22] registered Dec 12 18:38:17.941578 kernel: acpiphp: Slot [23] registered Dec 12 18:38:17.941586 kernel: acpiphp: Slot [24] registered Dec 12 18:38:17.941595 kernel: acpiphp: Slot [25] registered Dec 12 18:38:17.941603 kernel: acpiphp: Slot [26] registered Dec 12 18:38:17.941612 kernel: acpiphp: Slot [27] registered Dec 12 18:38:17.941626 kernel: acpiphp: Slot [28] registered Dec 12 18:38:17.941634 kernel: acpiphp: Slot [29] registered Dec 12 18:38:17.941643 kernel: acpiphp: Slot [30] registered Dec 12 18:38:17.941651 kernel: acpiphp: Slot [31] registered Dec 12 18:38:17.941660 kernel: PCI host bridge to bus 0000:00 Dec 12 18:38:17.941825 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 12 18:38:17.941928 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 12 18:38:17.942012 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 12 18:38:17.942100 kernel: pci_bus 0000:00: 
root bus resource [mem 0x80000000-0xfebfffff window] Dec 12 18:38:17.942181 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 12 18:38:17.942284 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 18:38:17.942416 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Dec 12 18:38:17.944597 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Dec 12 18:38:17.944721 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Dec 12 18:38:17.944823 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef] Dec 12 18:38:17.944916 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Dec 12 18:38:17.945006 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Dec 12 18:38:17.945097 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Dec 12 18:38:17.945186 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Dec 12 18:38:17.945284 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint Dec 12 18:38:17.945376 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f] Dec 12 18:38:17.945482 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Dec 12 18:38:17.945601 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 12 18:38:17.945692 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 12 18:38:17.945802 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Dec 12 18:38:17.945912 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Dec 12 18:38:17.946004 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref] Dec 12 18:38:17.946107 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff] Dec 12 18:38:17.946222 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref] Dec 12 18:38:17.946342 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 12 18:38:17.946478 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:38:17.948677 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf] Dec 12 18:38:17.948792 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff] Dec 12 18:38:17.948915 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref] Dec 12 18:38:17.949074 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 12 18:38:17.949178 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df] Dec 12 18:38:17.949311 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff] Dec 12 18:38:17.949449 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 12 18:38:17.949587 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:38:17.949681 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f] Dec 12 18:38:17.949771 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff] Dec 12 18:38:17.949879 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 12 18:38:17.949985 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:38:17.950076 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f] Dec 12 18:38:17.950167 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff] Dec 12 18:38:17.950283 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref] Dec 12 18:38:17.950397 kernel: pci 0000:00:07.0: 
[1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 12 18:38:17.952467 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff] Dec 12 18:38:17.952634 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff] Dec 12 18:38:17.952727 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref] Dec 12 18:38:17.952840 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 18:38:17.952934 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f] Dec 12 18:38:17.953025 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref] Dec 12 18:38:17.953036 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 12 18:38:17.953049 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 12 18:38:17.953057 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 12 18:38:17.953066 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 12 18:38:17.953074 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 12 18:38:17.953083 kernel: iommu: Default domain type: Translated Dec 12 18:38:17.953091 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 12 18:38:17.953100 kernel: PCI: Using ACPI for IRQ routing Dec 12 18:38:17.953109 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 12 18:38:17.953118 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 12 18:38:17.953129 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Dec 12 18:38:17.953251 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 12 18:38:17.953354 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 12 18:38:17.953445 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 12 18:38:17.953456 kernel: vgaarb: loaded Dec 12 18:38:17.953465 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 12 18:38:17.953473 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 12 18:38:17.953482 kernel: clocksource: Switched to clocksource kvm-clock Dec 12 18:38:17.953504 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 18:38:17.953516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 18:38:17.953525 kernel: pnp: PnP ACPI init Dec 12 18:38:17.953533 kernel: pnp: PnP ACPI: found 4 devices Dec 12 18:38:17.953542 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 12 18:38:17.953597 kernel: NET: Registered PF_INET protocol family Dec 12 18:38:17.953610 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 18:38:17.953622 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 12 18:38:17.953634 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 18:38:17.953647 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 12 18:38:17.953663 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 12 18:38:17.953675 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 12 18:38:17.953688 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:38:17.953700 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 12 18:38:17.953711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 18:38:17.953723 kernel: NET: Registered PF_XDP protocol family Dec 12 18:38:17.953838 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 12 18:38:17.953927 kernel: pci_bus 
0000:00: resource 5 [io 0x0d00-0xffff window] Dec 12 18:38:17.954042 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 12 18:38:17.954162 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 12 18:38:17.954266 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 12 18:38:17.954367 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 12 18:38:17.954478 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 12 18:38:17.957094 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 12 18:38:17.957290 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 30240 usecs Dec 12 18:38:17.957311 kernel: PCI: CLS 0 bytes, default 64 Dec 12 18:38:17.957322 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 12 18:38:17.957342 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns Dec 12 18:38:17.957352 kernel: Initialise system trusted keyrings Dec 12 18:38:17.957366 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 12 18:38:17.957375 kernel: Key type asymmetric registered Dec 12 18:38:17.957388 kernel: Asymmetric key parser 'x509' registered Dec 12 18:38:17.957398 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 12 18:38:17.957410 kernel: io scheduler mq-deadline registered Dec 12 18:38:17.957423 kernel: io scheduler kyber registered Dec 12 18:38:17.957435 kernel: io scheduler bfq registered Dec 12 18:38:17.957450 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 12 18:38:17.957459 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 12 18:38:17.957471 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 12 18:38:17.957483 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 12 18:38:17.957516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 18:38:17.957542 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 12 18:38:17.957557 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 12 18:38:17.957568 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 12 18:38:17.957581 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 12 18:38:17.957589 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 12 18:38:17.957756 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 12 18:38:17.957869 kernel: rtc_cmos 00:03: registered as rtc0 Dec 12 18:38:17.957979 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:38:17 UTC (1765564697) Dec 12 18:38:17.958065 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 12 18:38:17.958076 kernel: intel_pstate: CPU model not supported Dec 12 18:38:17.958086 kernel: NET: Registered PF_INET6 protocol family Dec 12 18:38:17.958098 kernel: Segment Routing with IPv6 Dec 12 18:38:17.958107 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 18:38:17.958115 kernel: NET: Registered PF_PACKET protocol family Dec 12 18:38:17.958124 kernel: Key type dns_resolver registered Dec 12 18:38:17.958133 kernel: IPI shorthand broadcast: enabled Dec 12 18:38:17.958149 kernel: sched_clock: Marking stable (4024003656, 240779135)->(4318158923, -53376132) Dec 12 18:38:17.958158 kernel: registered taskstats version 1 Dec 12 18:38:17.958170 kernel: Loading compiled-in X.509 certificates Dec 12 18:38:17.958183 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 
12 18:38:17.958194 kernel: Demotion targets for Node 0: null Dec 12 18:38:17.958208 kernel: Key type .fscrypt registered Dec 12 18:38:17.958219 kernel: Key type fscrypt-provisioning registered Dec 12 18:38:17.958312 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 18:38:17.958330 kernel: ima: Allocated hash algorithm: sha1 Dec 12 18:38:17.958340 kernel: ima: No architecture policies found Dec 12 18:38:17.958356 kernel: clk: Disabling unused clocks Dec 12 18:38:17.958366 kernel: Warning: unable to open an initial console. Dec 12 18:38:17.958378 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 12 18:38:17.958393 kernel: Write protecting the kernel read-only data: 40960k Dec 12 18:38:17.958402 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 12 18:38:17.958413 kernel: Run /init as init process Dec 12 18:38:17.958426 kernel: with arguments: Dec 12 18:38:17.958436 kernel: /init Dec 12 18:38:17.958445 kernel: with environment: Dec 12 18:38:17.958458 kernel: HOME=/ Dec 12 18:38:17.958473 kernel: TERM=linux Dec 12 18:38:17.959600 systemd[1]: Successfully made /usr/ read-only. Dec 12 18:38:17.959640 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:38:17.959652 systemd[1]: Detected virtualization kvm. Dec 12 18:38:17.959667 systemd[1]: Detected architecture x86-64. Dec 12 18:38:17.959678 systemd[1]: Running in initrd. Dec 12 18:38:17.959688 systemd[1]: No hostname configured, using default hostname. Dec 12 18:38:17.959704 systemd[1]: Hostname set to . Dec 12 18:38:17.959720 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:38:17.959739 systemd[1]: Queued start job for default target initrd.target. Dec 12 18:38:17.959752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:38:17.959767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:38:17.959778 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 18:38:17.959794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:38:17.959805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 18:38:17.959824 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 18:38:17.959840 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 18:38:17.959856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 18:38:17.959867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:38:17.959882 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:38:17.959892 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:38:17.959906 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:38:17.959919 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:38:17.959932 systemd[1]: Reached target timers.target - Timer Units. 
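The device units listed above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) are systemd's escaped form of paths such as /dev/disk/by-label/EFI-SYSTEM. A simplified sketch of that escaping; on a real system, systemd-escape --path is the authoritative version and has a few extra rules (for example a leading dot):

```python
# Simplified approximation of systemd's path escaping, matching the unit
# names above: strip the leading "/", map "/" to "-", and hex-escape other
# non-alphanumeric characters.
def escape_path_unit(path: str, suffix: str = ".device") -> str:
    out = []
    for ch in path.lstrip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.:":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + suffix

print(escape_path_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log
```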
Dec 12 18:38:17.959946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:38:17.959956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:38:17.959968 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:38:17.959982 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:38:17.959996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:38:17.960011 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:38:17.960029 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:38:17.960042 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:38:17.960052 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:38:17.960064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:38:17.960073 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:38:17.960082 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:38:17.960092 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:38:17.960101 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:38:17.960112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:38:17.960122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:17.960179 systemd-journald[193]: Collecting audit messages is disabled. Dec 12 18:38:17.960209 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:38:17.960222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:38:17.960236 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:38:17.960252 systemd-journald[193]: Journal started Dec 12 18:38:17.960289 systemd-journald[193]: Runtime Journal (/run/log/journal/e7d75c24722f4450a6fb1a7eeb3ad379) is 4.9M, max 39.2M, 34.3M free. Dec 12 18:38:17.964512 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:38:17.965584 systemd-modules-load[194]: Inserted module 'overlay' Dec 12 18:38:17.975556 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:38:17.994165 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:38:18.086673 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 18:38:18.086711 kernel: Bridge firewalling registered Dec 12 18:38:18.016196 systemd-modules-load[194]: Inserted module 'br_netfilter' Dec 12 18:38:18.085690 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:38:18.090758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:18.092062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:38:18.097689 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:38:18.102793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 12 18:38:18.104389 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:38:18.107778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:38:18.116336 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:38:18.128030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:38:18.135932 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:38:18.150350 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:38:18.153596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:38:18.157663 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:38:18.190922 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:38:18.198848 systemd-resolved[227]: Positive Trust Anchors: Dec 12 18:38:18.199654 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:38:18.199692 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:38:18.207991 systemd-resolved[227]: Defaulting to hostname 'linux'. Dec 12 18:38:18.210202 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:38:18.211983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:38:18.320579 kernel: SCSI subsystem initialized Dec 12 18:38:18.334561 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:38:18.348565 kernel: iscsi: registered transport (tcp) Dec 12 18:38:18.377931 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:38:18.378042 kernel: QLogic iSCSI HBA Driver Dec 12 18:38:18.412769 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:38:18.448731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:38:18.453391 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:38:18.529951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:38:18.533234 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
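systemd-resolved's positive trust anchor above is a DS record for the DNS root zone. A small sketch that splits its fields apart; the algorithm and digest-type names are the standard DNSSEC registry values (8 is RSA/SHA-256, 2 is SHA-256):

```python
# Break the logged root-zone trust anchor into its DS record fields.
record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, rrclass, rrtype, key_tag, algorithm, digest_type, digest = record.split()
print(f"key tag {key_tag}, algorithm {algorithm} (RSA/SHA-256), "
      f"digest type {digest_type} (SHA-256), {len(digest) * 4}-bit digest")
```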
Dec 12 18:38:18.604575 kernel: raid6: avx2x4 gen() 18325 MB/s Dec 12 18:38:18.622558 kernel: raid6: avx2x2 gen() 21474 MB/s Dec 12 18:38:18.640591 kernel: raid6: avx2x1 gen() 14764 MB/s Dec 12 18:38:18.640722 kernel: raid6: using algorithm avx2x2 gen() 21474 MB/s Dec 12 18:38:18.660665 kernel: raid6: .... xor() 15549 MB/s, rmw enabled Dec 12 18:38:18.660816 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:38:18.695579 kernel: xor: automatically using best checksumming function avx Dec 12 18:38:18.909128 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:38:18.922145 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:38:18.925927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:38:18.964384 systemd-udevd[442]: Using default interface naming scheme 'v255'. Dec 12 18:38:18.974051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:38:18.979275 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 18:38:19.019807 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Dec 12 18:38:19.064043 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:38:19.068703 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:38:19.141768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:38:19.146402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:38:19.237614 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Dec 12 18:38:19.251847 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 12 18:38:19.272570 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Dec 12 18:38:19.276512 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:38:19.282169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:38:19.282289 kernel: GPT:9289727 != 125829119 Dec 12 18:38:19.282310 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:38:19.282329 kernel: GPT:9289727 != 125829119 Dec 12 18:38:19.283852 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 18:38:19.290994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:19.298633 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Dec 12 18:38:19.303600 kernel: virtio_blk virtio5: [vdb] 964 512-byte logical blocks (494 kB/482 KiB) Dec 12 18:38:19.307011 kernel: AES CTR mode by8 optimization enabled Dec 12 18:38:19.334621 kernel: scsi host0: Virtio SCSI HBA Dec 12 18:38:19.336524 kernel: libata version 3.00 loaded. Dec 12 18:38:19.345211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:19.347824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:19.359520 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 12 18:38:19.356752 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
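The GPT warnings above are expected on a first boot: the backup GPT header sits at LBA 9289727 (presumably the end of the original Flatcar disk image), while the droplet's virtual disk actually has 125829120 512-byte sectors. A short sketch of the size arithmetic:

```python
# Size arithmetic behind "GPT:9289727 != 125829119" above.
SECTOR = 512
image_last_lba = 9289727        # where the image's backup GPT header lives
disk_sectors = 125829120        # "[vda] 125829120 512-byte logical blocks"

print("image extent ~ %.2f GiB" % ((image_last_lba + 1) * SECTOR / 2**30))
print("disk size    = %.1f GiB / %.1f GB" % (disk_sectors * SECTOR / 2**30,
                                             disk_sectors * SECTOR / 1e9))
# -> roughly 4.43 GiB vs 60.0 GiB (64.4 GB), matching the virtio_blk line
```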
Dec 12 18:38:19.378215 kernel: scsi host1: ata_piix Dec 12 18:38:19.378474 kernel: scsi host2: ata_piix Dec 12 18:38:19.382845 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Dec 12 18:38:19.382870 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Dec 12 18:38:19.382882 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:38:19.368016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:19.386453 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:19.453573 kernel: ACPI: bus type USB registered Dec 12 18:38:19.460855 kernel: usbcore: registered new interface driver usbfs Dec 12 18:38:19.460950 kernel: usbcore: registered new interface driver hub Dec 12 18:38:19.460971 kernel: usbcore: registered new device driver usb Dec 12 18:38:19.478872 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 18:38:19.583028 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:19.600578 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Dec 12 18:38:19.601055 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Dec 12 18:38:19.603236 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Dec 12 18:38:19.605509 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Dec 12 18:38:19.608596 kernel: hub 1-0:1.0: USB hub found Dec 12 18:38:19.608953 kernel: hub 1-0:1.0: 2 ports detected Dec 12 18:38:19.608760 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 18:38:19.624589 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 18:38:19.627634 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 18:38:19.640177 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:38:19.657028 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:38:19.664072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:38:19.665270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:38:19.667436 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:38:19.670931 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:38:19.674726 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:38:19.700569 disk-uuid[598]: Primary Header is updated. Dec 12 18:38:19.700569 disk-uuid[598]: Secondary Entries is updated. Dec 12 18:38:19.700569 disk-uuid[598]: Secondary Header is updated. Dec 12 18:38:19.709305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:19.713325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:38:20.722628 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 18:38:20.723945 disk-uuid[601]: The operation has completed successfully. Dec 12 18:38:20.783559 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:38:20.783727 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:38:20.815228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 12 18:38:20.832691 sh[617]: Success Dec 12 18:38:20.858698 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 18:38:20.858776 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:38:20.860641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:38:20.874525 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Dec 12 18:38:20.937562 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:38:20.944184 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:38:20.949701 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 18:38:20.970530 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (629) Dec 12 18:38:20.976044 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:38:20.976141 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:20.984957 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:38:20.985048 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:38:20.989135 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:38:20.991367 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:38:20.992306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:38:20.993307 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:38:20.997391 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 18:38:21.026567 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (660) Dec 12 18:38:21.031478 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:21.031585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:21.040524 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:21.040620 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:21.047580 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:21.049765 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:38:21.052628 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:38:21.172923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:38:21.182822 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:38:21.232606 systemd-networkd[798]: lo: Link UP Dec 12 18:38:21.232616 systemd-networkd[798]: lo: Gained carrier Dec 12 18:38:21.241141 systemd-networkd[798]: Enumeration completed Dec 12 18:38:21.241743 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:38:21.244071 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 12 18:38:21.244076 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Dec 12 18:38:21.255192 systemd[1]: Reached target network.target - Network. 
Dec 12 18:38:21.262290 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:38:21.262303 systemd-networkd[798]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:38:21.266037 systemd-networkd[798]: eth0: Link UP Dec 12 18:38:21.266303 systemd-networkd[798]: eth1: Link UP Dec 12 18:38:21.271553 systemd-networkd[798]: eth0: Gained carrier Dec 12 18:38:21.271577 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Dec 12 18:38:21.276705 systemd-networkd[798]: eth1: Gained carrier Dec 12 18:38:21.276729 systemd-networkd[798]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:38:21.288603 systemd-networkd[798]: eth0: DHCPv4 address 134.199.209.86/20, gateway 134.199.208.1 acquired from 169.254.169.253 Dec 12 18:38:21.296955 ignition[709]: Ignition 2.22.0 Dec 12 18:38:21.296975 ignition[709]: Stage: fetch-offline Dec 12 18:38:21.298657 systemd-networkd[798]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253 Dec 12 18:38:21.297022 ignition[709]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:21.300752 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:38:21.297036 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:21.297154 ignition[709]: parsed url from cmdline: "" Dec 12 18:38:21.304716 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 18:38:21.297158 ignition[709]: no config URL provided Dec 12 18:38:21.297167 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:38:21.297176 ignition[709]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:38:21.297183 ignition[709]: failed to fetch config: resource requires networking Dec 12 18:38:21.297380 ignition[709]: Ignition finished successfully Dec 12 18:38:21.342903 ignition[808]: Ignition 2.22.0 Dec 12 18:38:21.342920 ignition[808]: Stage: fetch Dec 12 18:38:21.343101 ignition[808]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:21.343118 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:21.343228 ignition[808]: parsed url from cmdline: "" Dec 12 18:38:21.343233 ignition[808]: no config URL provided Dec 12 18:38:21.343239 ignition[808]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:38:21.343249 ignition[808]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:38:21.343280 ignition[808]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Dec 12 18:38:21.359739 ignition[808]: GET result: OK Dec 12 18:38:21.362425 ignition[808]: parsing config with SHA512: c60c8ea573e75965f328aa836e9d59ae8edc945199237359ee2e35901cd4940d13d2d2010d53c136e266f8c1872c879df73aeea23e8126640c68ca5097246a7f Dec 12 18:38:21.367157 unknown[808]: fetched base config from "system" Dec 12 18:38:21.367173 unknown[808]: fetched base config from "system" Dec 12 18:38:21.367971 ignition[808]: fetch: fetch complete Dec 12 18:38:21.367183 unknown[808]: fetched user config from "digitalocean" Dec 12 18:38:21.367977 ignition[808]: fetch: fetch passed Dec 12 18:38:21.372186 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
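Ignition's fetch stage above pulls the user-provided config from DigitalOcean's link-local metadata service. A hedged sketch of reading the same endpoint by hand from inside the droplet; the URL is taken from the log and error handling is omitted:

```python
# Fetch the same user-data document that ignition[808] retrieved above.
# Only works from inside the droplet, since 169.254.169.254 is link-local.
from urllib.request import urlopen

URL = "http://169.254.169.254/metadata/v1/user-data"
with urlopen(URL, timeout=5) as resp:
    user_data = resp.read().decode()
print(user_data[:200])
```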
Dec 12 18:38:21.368035 ignition[808]: Ignition finished successfully Dec 12 18:38:21.375620 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:38:21.432146 ignition[815]: Ignition 2.22.0 Dec 12 18:38:21.432162 ignition[815]: Stage: kargs Dec 12 18:38:21.432310 ignition[815]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:21.432320 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:21.434800 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:38:21.433145 ignition[815]: kargs: kargs passed Dec 12 18:38:21.433197 ignition[815]: Ignition finished successfully Dec 12 18:38:21.438675 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:38:21.489243 ignition[822]: Ignition 2.22.0 Dec 12 18:38:21.489258 ignition[822]: Stage: disks Dec 12 18:38:21.489427 ignition[822]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:21.489439 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:21.492358 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:38:21.490729 ignition[822]: disks: disks passed Dec 12 18:38:21.494357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:38:21.490993 ignition[822]: Ignition finished successfully Dec 12 18:38:21.495459 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:38:21.496878 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:38:21.498368 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:38:21.499774 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:38:21.503646 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:38:21.545527 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:38:21.550215 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:38:21.554624 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:38:21.704532 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:38:21.705394 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:38:21.707122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:38:21.710689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:38:21.713802 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:38:21.726718 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Dec 12 18:38:21.733623 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 12 18:38:21.737600 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Dec 12 18:38:21.739119 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:38:21.748658 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:21.748698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:21.740156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:38:21.751412 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Dec 12 18:38:21.759863 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:21.759925 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:21.763866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 18:38:21.781677 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:38:21.817939 coreos-metadata[840]: Dec 12 18:38:21.817 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:21.830047 coreos-metadata[840]: Dec 12 18:38:21.829 INFO Fetch successful Dec 12 18:38:21.835394 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Dec 12 18:38:21.836578 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Dec 12 18:38:21.841091 coreos-metadata[841]: Dec 12 18:38:21.841 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:21.849535 initrd-setup-root[869]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:38:21.855305 coreos-metadata[841]: Dec 12 18:38:21.855 INFO Fetch successful Dec 12 18:38:21.858950 initrd-setup-root[876]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:38:21.865964 coreos-metadata[841]: Dec 12 18:38:21.865 INFO wrote hostname ci-4459.2.2-7-7f06ea9468 to /sysroot/etc/hostname Dec 12 18:38:21.867780 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:38:21.870382 initrd-setup-root[883]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:38:21.876309 initrd-setup-root[891]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:38:22.001935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:38:22.005192 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:38:22.007678 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:38:22.035849 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:38:22.038164 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:22.056868 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 18:38:22.088049 ignition[959]: INFO : Ignition 2.22.0 Dec 12 18:38:22.088049 ignition[959]: INFO : Stage: mount Dec 12 18:38:22.089944 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:22.089944 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:22.089944 ignition[959]: INFO : mount: mount passed Dec 12 18:38:22.089944 ignition[959]: INFO : Ignition finished successfully Dec 12 18:38:22.091843 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:38:22.096136 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:38:22.115108 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:38:22.140545 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (970) Dec 12 18:38:22.144530 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:38:22.144603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:38:22.153436 kernel: BTRFS info (device vda6): turning on async discard Dec 12 18:38:22.153554 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 18:38:22.156077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
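flatcar-metadata-hostname.service above fetches the droplet's metadata JSON and writes the hostname into the new root. A rough Python equivalent is sketched below; write_hostname is a hypothetical helper, and it assumes the metadata document exposes a top-level "hostname" field, which is how DigitalOcean's v1 JSON is laid out.

import json
import urllib.request

# The document coreos-metadata fetches in the log above.
METADATA_URL = "http://169.254.169.254/metadata/v1.json"

def write_hostname(dest: str = "/sysroot/etc/hostname", timeout: float = 5.0) -> str:
    with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
        meta = json.load(resp)
    hostname = meta["hostname"]          # e.g. ci-4459.2.2-7-7f06ea9468
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname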
Dec 12 18:38:22.193163 ignition[987]: INFO : Ignition 2.22.0 Dec 12 18:38:22.193163 ignition[987]: INFO : Stage: files Dec 12 18:38:22.195309 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:22.195309 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:22.195309 ignition[987]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:38:22.198898 ignition[987]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:38:22.198898 ignition[987]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:38:22.198898 ignition[987]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:38:22.198898 ignition[987]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:38:22.203625 ignition[987]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:38:22.203625 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:38:22.203625 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:38:22.199025 unknown[987]: wrote ssh authorized keys file for user: core Dec 12 18:38:22.314574 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 18:38:22.486528 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing 
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:22.504754 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 12 18:38:22.519665 systemd-networkd[798]: eth1: Gained IPv6LL Dec 12 18:38:22.773235 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 18:38:23.288089 systemd-networkd[798]: eth0: Gained IPv6LL Dec 12 18:38:23.302303 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 12 18:38:23.302303 ignition[987]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 18:38:23.305407 ignition[987]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:38:23.307674 ignition[987]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 18:38:23.307674 ignition[987]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 18:38:23.307674 ignition[987]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 18:38:23.312224 ignition[987]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 18:38:23.312224 ignition[987]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:38:23.312224 ignition[987]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 18:38:23.312224 ignition[987]: INFO : files: files passed Dec 12 18:38:23.312224 ignition[987]: INFO : Ignition finished successfully Dec 12 18:38:23.309841 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 18:38:23.313677 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 18:38:23.318708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 18:38:23.334901 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 18:38:23.336612 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 18:38:23.345933 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:38:23.345933 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:38:23.348694 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 18:38:23.350364 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:38:23.352316 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 18:38:23.355835 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 18:38:23.422329 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Dec 12 18:38:23.422454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 18:38:23.424796 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 18:38:23.425965 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 18:38:23.427724 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 18:38:23.428749 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 18:38:23.472088 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:38:23.474857 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 18:38:23.502270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:38:23.503360 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:38:23.505109 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 18:38:23.506666 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 18:38:23.506844 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 18:38:23.508561 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 18:38:23.509482 systemd[1]: Stopped target basic.target - Basic System. Dec 12 18:38:23.511060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 18:38:23.512694 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:38:23.514417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 18:38:23.515917 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:38:23.517521 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 18:38:23.519046 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:38:23.520767 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 18:38:23.522443 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 18:38:23.524309 systemd[1]: Stopped target swap.target - Swaps. Dec 12 18:38:23.525616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 18:38:23.525754 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:38:23.527567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:38:23.528675 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:38:23.530297 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 18:38:23.530780 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 18:38:23.532028 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 18:38:23.532236 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 18:38:23.534167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 18:38:23.534414 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 18:38:23.536037 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 18:38:23.536213 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 18:38:23.537651 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Dec 12 18:38:23.537860 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 12 18:38:23.541650 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 18:38:23.547754 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 18:38:23.549328 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 18:38:23.549655 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:38:23.552322 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 18:38:23.554757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:38:23.560482 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 18:38:23.560640 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 18:38:23.591552 ignition[1041]: INFO : Ignition 2.22.0 Dec 12 18:38:23.592621 ignition[1041]: INFO : Stage: umount Dec 12 18:38:23.594581 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:38:23.594581 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Dec 12 18:38:23.596684 ignition[1041]: INFO : umount: umount passed Dec 12 18:38:23.596684 ignition[1041]: INFO : Ignition finished successfully Dec 12 18:38:23.599182 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 18:38:23.599934 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 18:38:23.600029 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 18:38:23.605023 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 18:38:23.605179 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 18:38:23.637724 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 18:38:23.637832 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 18:38:23.639398 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 18:38:23.639507 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 18:38:23.641148 systemd[1]: Stopped target network.target - Network. Dec 12 18:38:23.642679 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 18:38:23.642779 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:38:23.656877 systemd[1]: Stopped target paths.target - Path Units. Dec 12 18:38:23.657740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 18:38:23.662036 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:38:23.663221 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 18:38:23.663887 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 18:38:23.665618 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 18:38:23.665675 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 18:38:23.667344 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 18:38:23.667391 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:38:23.668935 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 18:38:23.669019 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 18:38:23.670940 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 18:38:23.671020 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Dec 12 18:38:23.673145 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 18:38:23.674997 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 18:38:23.677694 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 18:38:23.677862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 18:38:23.682183 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 18:38:23.682348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 18:38:23.684799 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 18:38:23.684973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 18:38:23.691203 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 18:38:23.691990 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 18:38:23.692180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 18:38:23.695270 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 18:38:23.697205 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 18:38:23.698483 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 18:38:23.698566 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:38:23.702672 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 18:38:23.706046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 18:38:23.706163 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:38:23.708025 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 18:38:23.708108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:38:23.712731 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 18:38:23.712804 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 18:38:23.713862 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 18:38:23.713947 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:38:23.716441 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:38:23.722964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 18:38:23.723080 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:23.734936 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 18:38:23.735226 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:38:23.738812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 18:38:23.738940 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 18:38:23.740438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 18:38:23.740517 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:38:23.742072 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 18:38:23.742157 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:38:23.744635 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 18:38:23.744709 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Dec 12 18:38:23.746338 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 18:38:23.746420 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:38:23.749678 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 18:38:23.753018 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 18:38:23.753126 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:38:23.755153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 18:38:23.755237 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:38:23.756985 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 18:38:23.757060 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:38:23.760455 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 18:38:23.760562 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:38:23.762368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:23.762428 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:23.765973 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 18:38:23.766043 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 18:38:23.766083 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 18:38:23.766128 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:23.766735 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 18:38:23.766875 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 18:38:23.775296 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 18:38:23.775445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 18:38:23.777157 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 18:38:23.779413 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 18:38:23.801898 systemd[1]: Switching root. Dec 12 18:38:23.873921 systemd-journald[193]: Journal stopped Dec 12 18:38:25.158334 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). 
Dec 12 18:38:25.158420 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 18:38:25.158438 kernel: SELinux: policy capability open_perms=1 Dec 12 18:38:25.158460 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 18:38:25.158472 kernel: SELinux: policy capability always_check_network=0 Dec 12 18:38:25.158504 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 18:38:25.158516 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 18:38:25.158536 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 18:38:25.158547 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 18:38:25.158562 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 18:38:25.158580 kernel: audit: type=1403 audit(1765564704.031:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 18:38:25.158600 systemd[1]: Successfully loaded SELinux policy in 77.471ms. Dec 12 18:38:25.158631 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.992ms. Dec 12 18:38:25.158647 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 18:38:25.158660 systemd[1]: Detected virtualization kvm. Dec 12 18:38:25.158672 systemd[1]: Detected architecture x86-64. Dec 12 18:38:25.158684 systemd[1]: Detected first boot. Dec 12 18:38:25.158696 systemd[1]: Hostname set to <ci-4459.2.2-7-7f06ea9468>. Dec 12 18:38:25.158708 systemd[1]: Initializing machine ID from VM UUID. Dec 12 18:38:25.158720 zram_generator::config[1085]: No configuration found. Dec 12 18:38:25.158732 kernel: Guest personality initialized and is inactive Dec 12 18:38:25.158750 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 12 18:38:25.158761 kernel: Initialized host personality Dec 12 18:38:25.158802 kernel: NET: Registered PF_VSOCK protocol family Dec 12 18:38:25.158815 systemd[1]: Populated /etc with preset unit settings. Dec 12 18:38:25.158829 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 18:38:25.158841 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 18:38:25.158853 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 18:38:25.158864 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 18:38:25.158876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 18:38:25.158894 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 18:38:25.158906 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 18:38:25.158918 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 18:38:25.158931 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 18:38:25.158942 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 18:38:25.158954 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 18:38:25.158966 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 18:38:25.158977 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
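The entries above show systemd in the real root detecting KVM virtualization and a first boot. The same detection can be queried from userspace with systemd-detect-virt; the thin wrapper below is a hypothetical convenience, and it assumes the systemd-detect-virt binary is on PATH, as it is on Flatcar.

import subprocess

def detect_virt() -> str:
    # systemd-detect-virt prints the detected hypervisor ("kvm" on this droplet);
    # it prints "none" and exits non-zero when no virtualization is detected.
    result = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    return result.stdout.strip() or "none"

if __name__ == "__main__":
    print(detect_virt())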
Dec 12 18:38:25.158989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 18:38:25.159003 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 18:38:25.159016 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 18:38:25.159029 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 18:38:25.159041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 18:38:25.159052 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 18:38:25.159066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 18:38:25.159078 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 18:38:25.159090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 18:38:25.159102 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 18:38:25.159113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 18:38:25.159124 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 18:38:25.159136 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:38:25.159147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:38:25.159159 systemd[1]: Reached target slices.target - Slice Units. Dec 12 18:38:25.159172 systemd[1]: Reached target swap.target - Swaps. Dec 12 18:38:25.159190 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 18:38:25.159201 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 18:38:25.159212 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 18:38:25.159224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:38:25.159237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:38:25.159255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:38:25.159269 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 18:38:25.159282 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 18:38:25.159294 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 18:38:25.159308 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 18:38:25.159320 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:25.159351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 18:38:25.159364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 18:38:25.159376 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 18:38:25.159388 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 18:38:25.159402 systemd[1]: Reached target machines.target - Containers. Dec 12 18:38:25.159414 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 12 18:38:25.159428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:38:25.159440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:38:25.159452 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 18:38:25.159463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:38:25.159475 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 18:38:25.159499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:38:25.160183 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 18:38:25.160201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:38:25.160216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 18:38:25.160234 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 18:38:25.160247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 18:38:25.160275 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 18:38:25.160289 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 18:38:25.160305 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:38:25.160321 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:38:25.160346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:38:25.160360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:38:25.160373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 18:38:25.160386 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 18:38:25.160399 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:38:25.160415 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 18:38:25.160428 systemd[1]: Stopped verity-setup.service. Dec 12 18:38:25.160442 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:25.160455 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 18:38:25.160482 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 18:38:25.160516 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 18:38:25.160527 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 18:38:25.160563 kernel: ACPI: bus type drm_connector registered Dec 12 18:38:25.160576 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 18:38:25.160588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 18:38:25.160600 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:38:25.160612 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 18:38:25.160623 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 12 18:38:25.160672 systemd-journald[1164]: Collecting audit messages is disabled. Dec 12 18:38:25.160700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:38:25.160716 systemd-journald[1164]: Journal started Dec 12 18:38:25.160740 systemd-journald[1164]: Runtime Journal (/run/log/journal/e7d75c24722f4450a6fb1a7eeb3ad379) is 4.9M, max 39.2M, 34.3M free. Dec 12 18:38:24.718572 systemd[1]: Queued start job for default target multi-user.target. Dec 12 18:38:24.730587 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 18:38:24.731119 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 18:38:25.167530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:38:25.170525 kernel: fuse: init (API version 7.41) Dec 12 18:38:25.176527 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:38:25.181778 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:38:25.182372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:38:25.183868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:38:25.184404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:38:25.186168 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 18:38:25.186832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 18:38:25.188993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:38:25.190682 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 18:38:25.192946 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 18:38:25.198646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:38:25.200505 kernel: loop: module loaded Dec 12 18:38:25.202898 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:38:25.203752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:38:25.216906 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:38:25.221721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 18:38:25.228628 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 18:38:25.231680 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 18:38:25.231741 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:38:25.237250 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 18:38:25.243062 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 18:38:25.243984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:38:25.248694 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 18:38:25.252722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 18:38:25.253586 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:38:25.255441 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 12 18:38:25.257640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:38:25.262793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:38:25.272100 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 18:38:25.278927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:38:25.283541 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 18:38:25.287448 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 18:38:25.291346 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 18:38:25.309818 systemd-journald[1164]: Time spent on flushing to /var/log/journal/e7d75c24722f4450a6fb1a7eeb3ad379 is 102.462ms for 1013 entries. Dec 12 18:38:25.309818 systemd-journald[1164]: System Journal (/var/log/journal/e7d75c24722f4450a6fb1a7eeb3ad379) is 8M, max 195.6M, 187.6M free. Dec 12 18:38:25.464204 systemd-journald[1164]: Received client request to flush runtime journal. Dec 12 18:38:25.464273 kernel: loop0: detected capacity change from 0 to 8 Dec 12 18:38:25.464299 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 18:38:25.464318 kernel: loop1: detected capacity change from 0 to 224512 Dec 12 18:38:25.464333 kernel: loop2: detected capacity change from 0 to 110984 Dec 12 18:38:25.327517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 18:38:25.330943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:38:25.340652 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 18:38:25.348029 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 18:38:25.407712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:38:25.421101 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 12 18:38:25.421117 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 12 18:38:25.426198 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:38:25.431213 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 18:38:25.455562 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 18:38:25.465702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 18:38:25.482527 kernel: loop3: detected capacity change from 0 to 128560 Dec 12 18:38:25.506615 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 18:38:25.510433 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 18:38:25.518870 kernel: loop4: detected capacity change from 0 to 8 Dec 12 18:38:25.518975 kernel: loop5: detected capacity change from 0 to 224512 Dec 12 18:38:25.575727 kernel: loop6: detected capacity change from 0 to 110984 Dec 12 18:38:25.615569 kernel: loop7: detected capacity change from 0 to 128560 Dec 12 18:38:25.621278 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Dec 12 18:38:25.621297 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Dec 12 18:38:25.627957 (sd-merge)[1234]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. 
Dec 12 18:38:25.628478 (sd-merge)[1234]: Merged extensions into '/usr'. Dec 12 18:38:25.635234 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:38:25.656622 systemd[1]: Reload requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 18:38:25.656789 systemd[1]: Reloading... Dec 12 18:38:25.763530 zram_generator::config[1258]: No configuration found. Dec 12 18:38:26.104481 ldconfig[1204]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 18:38:26.179012 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 18:38:26.179368 systemd[1]: Reloading finished in 522 ms. Dec 12 18:38:26.213223 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 18:38:26.217915 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 18:38:26.236172 systemd[1]: Starting ensure-sysext.service... Dec 12 18:38:26.242686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:38:26.276940 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)... Dec 12 18:38:26.276966 systemd[1]: Reloading... Dec 12 18:38:26.300955 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 18:38:26.301312 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 18:38:26.303244 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 18:38:26.305792 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 18:38:26.307561 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 18:38:26.307916 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Dec 12 18:38:26.307998 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Dec 12 18:38:26.318363 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:38:26.319547 systemd-tmpfiles[1306]: Skipping /boot Dec 12 18:38:26.339348 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 18:38:26.339530 systemd-tmpfiles[1306]: Skipping /boot Dec 12 18:38:26.388976 zram_generator::config[1333]: No configuration found. Dec 12 18:38:26.612653 systemd[1]: Reloading finished in 335 ms. Dec 12 18:38:26.641925 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 18:38:26.649707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:38:26.659716 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:38:26.663829 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 18:38:26.667877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 18:38:26.679762 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:38:26.686828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:38:26.690901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
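The (sd-merge) lines above are systemd-sysext overlaying the extension images staged earlier onto /usr. The merge itself is an overlayfs mount performed by systemd; the sketch below only enumerates the images it would consider. The candidate_extensions helper is hypothetical and the directory list is a simplified assumption about systemd-sysext's search path.

import os

# Simplified view of where systemd-sysext looks for extension images.
SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def candidate_extensions():
    # Collect image names such as 'kubernetes' (from kubernetes.raw), matching
    # the names the merge step above reported as "Using extensions ...".
    found = []
    for d in SYSEXT_DIRS:
        if not os.path.isdir(d):
            continue
        for entry in sorted(os.listdir(d)):
            name = entry[:-4] if entry.endswith(".raw") else entry
            found.append((name, os.path.join(d, entry)))
    return found

if __name__ == "__main__":
    for name, path in candidate_extensions():
        print(f"{name}: {path}")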
Dec 12 18:38:26.702118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.702971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:38:26.706396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:38:26.717646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:38:26.728371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:38:26.731601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:38:26.731832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:38:26.731993 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.752244 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 18:38:26.758995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.759419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:38:26.760164 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:38:26.760363 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:38:26.760532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.773792 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 18:38:26.776080 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:38:26.779425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:38:26.790451 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 18:38:26.793224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:38:26.795043 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:38:26.798813 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:38:26.799585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:38:26.810217 systemd-udevd[1382]: Using default interface naming scheme 'v255'. Dec 12 18:38:26.810767 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.811074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:38:26.814561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Dec 12 18:38:26.815749 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 18:38:26.815793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:38:26.815847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:38:26.815900 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:38:26.820770 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 18:38:26.822747 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:38:26.822818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:26.823477 systemd[1]: Finished ensure-sysext.service. Dec 12 18:38:26.826278 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 18:38:26.847031 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 18:38:26.856627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:38:26.862760 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 18:38:26.875003 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 18:38:26.875216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 18:38:26.917018 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 18:38:26.921858 augenrules[1438]: No rules Dec 12 18:38:26.923916 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:38:26.924209 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:38:26.975844 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 18:38:27.065326 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Dec 12 18:38:27.071911 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Dec 12 18:38:27.073244 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:27.073437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 18:38:27.075905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 18:38:27.078951 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 18:38:27.084778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 18:38:27.086255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 18:38:27.086318 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 18:38:27.086362 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 18:38:27.086383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 12 18:38:27.134695 kernel: ISO 9660 Extensions: RRIP_1991A Dec 12 18:38:27.146051 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Dec 12 18:38:27.151476 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 18:38:27.154515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 18:38:27.155896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 18:38:27.156153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 18:38:27.158270 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 12 18:38:27.158447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 18:38:27.159712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 18:38:27.160096 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 18:38:27.164446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 18:38:27.175076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 18:38:27.178166 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 18:38:27.217819 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 18:38:27.265573 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 18:38:27.319251 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 12 18:38:27.319643 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 12 18:38:27.350533 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 12 18:38:27.347735 systemd-networkd[1419]: lo: Link UP Dec 12 18:38:27.347740 systemd-networkd[1419]: lo: Gained carrier Dec 12 18:38:27.350305 systemd-networkd[1419]: Enumeration completed Dec 12 18:38:27.350465 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:38:27.351345 systemd-networkd[1419]: eth0: Configuring with /run/systemd/network/10-2e:88:97:d6:68:15.network. Dec 12 18:38:27.352185 systemd-networkd[1419]: eth1: Configuring with /run/systemd/network/10-6e:d2:d3:00:68:9d.network. Dec 12 18:38:27.352805 systemd-networkd[1419]: eth0: Link UP Dec 12 18:38:27.352978 systemd-networkd[1419]: eth0: Gained carrier Dec 12 18:38:27.353866 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 18:38:27.356847 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
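systemd-networkd above matches eth0 and eth1 against runtime units named after their MAC addresses (/run/systemd/network/10-<mac>.network). The exact contents of those generated files are not shown in the log; the render_network_unit helper below is hypothetical and only illustrates the general shape of a MAC-matched unit.

def render_network_unit(mac: str, dhcp: str = "ipv4") -> str:
    # Minimal .network unit matching an interface by MAC address, in the spirit
    # of /run/systemd/network/10-2e:88:97:d6:68:15.network referenced above.
    return (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        f"DHCP={dhcp}\n"
    )

if __name__ == "__main__":
    print(render_network_unit("2e:88:97:d6:68:15"), end="")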
Dec 12 18:38:27.363987 systemd-networkd[1419]: eth1: Link UP Dec 12 18:38:27.364868 systemd-networkd[1419]: eth1: Gained carrier Dec 12 18:38:27.369296 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 18:38:27.369459 systemd-resolved[1381]: Positive Trust Anchors: Dec 12 18:38:27.369476 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:38:27.369533 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:38:27.371612 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Dec 12 18:38:27.372052 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 18:38:27.387204 systemd-resolved[1381]: Using system hostname 'ci-4459.2.2-7-7f06ea9468'. Dec 12 18:38:27.393957 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:38:27.395973 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 18:38:27.398394 systemd[1]: Reached target network.target - Network. Dec 12 18:38:27.400601 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 18:38:27.401613 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 18:38:27.404714 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 18:38:27.405581 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:38:27.406348 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:38:27.407278 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:38:27.408288 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:38:27.409815 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:38:27.411956 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:38:27.412004 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:38:27.412744 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:38:27.415147 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:38:27.422868 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:38:27.426528 kernel: ACPI: button: Power Button [PWRF] Dec 12 18:38:27.431872 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:38:27.433116 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:38:27.434752 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:38:27.444660 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
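The positive trust anchor logged above is the DNSSEC root key, expressed as a DS record. The tiny parser below is a hypothetical illustration of that record format; it simply splits the textual form into owner name, key tag, algorithm, digest type, and digest.

def parse_ds_record(rr: str):
    # Parse a DS record like the root trust anchor logged above:
    # ". IN DS 20326 8 2 <digest>" -> (owner, key_tag, algorithm, digest_type, digest)
    owner, _cls, _type, key_tag, algorithm, digest_type, digest = rr.split()
    return owner, int(key_tag), int(algorithm), int(digest_type), digest.lower()

if __name__ == "__main__":
    print(parse_ds_record(
        ". IN DS 20326 8 2 "
        "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    ))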
Dec 12 18:38:27.447056 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:38:27.448996 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:38:27.450620 systemd-timesyncd[1414]: Contacted time server 192.48.105.15:123 (0.flatcar.pool.ntp.org). Dec 12 18:38:27.450690 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2025-12-12 18:38:27.206679 UTC. Dec 12 18:38:27.451465 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:38:27.452220 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:38:27.453635 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:38:27.453668 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:38:27.456664 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:38:27.461241 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 18:38:27.465827 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:38:27.470808 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:38:27.474718 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:38:27.482750 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:38:27.485617 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:38:27.493456 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:38:27.496615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:38:27.498602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:38:27.504706 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 18:38:27.510800 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:38:27.523187 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:38:27.526531 jq[1496]: false Dec 12 18:38:27.526917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:38:27.528963 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:38:27.538806 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:38:27.545749 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:38:27.551088 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:38:27.553134 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:38:27.561793 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
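
By this point the boot is largely socket- and timer-driven: docker.socket, sshd.socket and systemd-hostnamed.socket are listening, and systemd-timesyncd has synchronized against 0.flatcar.pool.ntp.org. Two standard systemd commands (nothing host-specific assumed) confirm that state on the running droplet:

    systemctl list-sockets          # should list docker.socket, sshd.socket, systemd-hostnamed.socket, ...
    timedatectl timesync-status     # should show the contacted NTP server and sync state
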
Dec 12 18:38:27.565223 coreos-metadata[1491]: Dec 12 18:38:27.565 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:27.567541 extend-filesystems[1497]: Found /dev/vda6 Dec 12 18:38:27.575089 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing passwd entry cache Dec 12 18:38:27.572644 oslogin_cache_refresh[1499]: Refreshing passwd entry cache Dec 12 18:38:27.594957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:38:27.595837 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting users, quitting Dec 12 18:38:27.595837 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:38:27.595825 oslogin_cache_refresh[1499]: Failure getting users, quitting Dec 12 18:38:27.596052 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Refreshing group entry cache Dec 12 18:38:27.595853 oslogin_cache_refresh[1499]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:38:27.595931 oslogin_cache_refresh[1499]: Refreshing group entry cache Dec 12 18:38:27.596715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 18:38:27.597459 oslogin_cache_refresh[1499]: Failure getting groups, quitting Dec 12 18:38:27.601524 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Failure getting groups, quitting Dec 12 18:38:27.601524 google_oslogin_nss_cache[1499]: oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:38:27.597478 oslogin_cache_refresh[1499]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:38:27.605295 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:38:27.605720 jq[1513]: true Dec 12 18:38:27.605855 extend-filesystems[1497]: Found /dev/vda9 Dec 12 18:38:27.619168 coreos-metadata[1491]: Dec 12 18:38:27.610 INFO Fetch successful Dec 12 18:38:27.606743 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:38:27.630180 extend-filesystems[1497]: Checking size of /dev/vda9 Dec 12 18:38:27.640369 jq[1524]: true Dec 12 18:38:27.647744 dbus-daemon[1493]: [system] SELinux support is enabled Dec 12 18:38:27.647901 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:38:27.654747 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:38:27.655572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:38:27.656423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:38:27.656512 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Dec 12 18:38:27.656527 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
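
coreos-metadata has fetched the droplet description from the DigitalOcean metadata service at the link-local address shown above. The same document can be inspected by hand from the host (jq is already present, per the jq[...] log lines):

    # Query the metadata endpoint the agent used; prints the droplet's JSON description.
    curl -s http://169.254.169.254/metadata/v1.json | jq .
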
Dec 12 18:38:27.666049 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:38:27.714570 update_engine[1512]: I20251212 18:38:27.713251 1512 main.cc:92] Flatcar Update Engine starting Dec 12 18:38:27.737262 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:38:27.739189 extend-filesystems[1497]: Resized partition /dev/vda9 Dec 12 18:38:27.741654 update_engine[1512]: I20251212 18:38:27.739014 1512 update_check_scheduler.cc:74] Next update check in 2m38s Dec 12 18:38:27.751247 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:38:27.770889 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Dec 12 18:38:27.779270 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:38:27.781501 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 18:38:27.784328 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:38:27.785453 tar[1520]: linux-amd64/LICENSE Dec 12 18:38:27.785453 tar[1520]: linux-amd64/helm Dec 12 18:38:27.788866 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:38:27.791360 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:38:27.918689 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Dec 12 18:38:27.925147 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Dec 12 18:38:27.927334 bash[1567]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:38:27.932248 kernel: Console: switching to colour dummy device 80x25 Dec 12 18:38:27.939022 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 18:38:27.939162 kernel: [drm] features: -context_init Dec 12 18:38:27.939189 kernel: [drm] number of scanouts: 1 Dec 12 18:38:27.939209 kernel: [drm] number of cap sets: 0 Dec 12 18:38:27.943781 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Dec 12 18:38:27.950206 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 12 18:38:27.950355 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 18:38:27.960935 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 18:38:27.993397 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:38:28.003798 systemd[1]: Starting sshkeys.service... Dec 12 18:38:28.019522 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 12 18:38:28.040811 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 18:38:28.044644 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 18:38:28.073720 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:38:28.073720 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 12 18:38:28.073720 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 12 18:38:28.078223 extend-filesystems[1497]: Resized filesystem in /dev/vda9 Dec 12 18:38:28.075626 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:38:28.075913 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
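
extend-filesystems grows the root filesystem on /dev/vda9 online, from 553472 to 15121403 4k blocks, using resize2fs 1.47.3 as shown above. Done by hand (as root) after the partition has been enlarged, the step amounts to roughly:

    lsblk /dev/vda        # confirm vda9 now spans the enlarged virtual disk
    resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill the partition
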
Dec 12 18:38:28.215829 coreos-metadata[1579]: Dec 12 18:38:28.211 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Dec 12 18:38:28.229603 coreos-metadata[1579]: Dec 12 18:38:28.228 INFO Fetch successful Dec 12 18:38:28.240024 unknown[1579]: wrote ssh authorized keys file for user: core Dec 12 18:38:28.284527 update-ssh-keys[1586]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:38:28.273837 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 18:38:28.286976 systemd[1]: Finished sshkeys.service. Dec 12 18:38:28.356854 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:38:28.403741 systemd-logind[1510]: New seat seat0. Dec 12 18:38:28.407655 systemd-networkd[1419]: eth0: Gained IPv6LL Dec 12 18:38:28.408938 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:38:28.416115 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:38:28.418697 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:38:28.428879 containerd[1527]: time="2025-12-12T18:38:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:38:28.425834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:28.429898 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:38:28.433051 containerd[1527]: time="2025-12-12T18:38:28.430280729Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.483832966Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.276µs" Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.483872922Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.483892930Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484064843Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484079432Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484104077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484167288Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484183472Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484406811Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484421122Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484431253Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:38:28.485991 containerd[1527]: time="2025-12-12T18:38:28.484439536Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:38:28.491767 containerd[1527]: time="2025-12-12T18:38:28.491707813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:38:28.492435 containerd[1527]: time="2025-12-12T18:38:28.492392334Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:38:28.493754 containerd[1527]: time="2025-12-12T18:38:28.493718418Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:38:28.493889 containerd[1527]: time="2025-12-12T18:38:28.493869610Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:38:28.494620 containerd[1527]: time="2025-12-12T18:38:28.494588902Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:38:28.498788 containerd[1527]: time="2025-12-12T18:38:28.495953571Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:38:28.498788 containerd[1527]: time="2025-12-12T18:38:28.496105701Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512665031Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512774302Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512795904Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512866562Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512888018Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512904603Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512926123Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512952774Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512969501Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.512984721Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.513008461Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.513039649Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.513220165Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:38:28.513517 containerd[1527]: time="2025-12-12T18:38:28.513245711Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513268082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513297219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513316017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513330622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513345979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513360553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513376954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513393453Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:38:28.514006 containerd[1527]: time="2025-12-12T18:38:28.513409557Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:38:28.519788 containerd[1527]: time="2025-12-12T18:38:28.517867298Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:38:28.519788 containerd[1527]: time="2025-12-12T18:38:28.518009057Z" level=info msg="Start snapshots syncer" Dec 12 18:38:28.519788 containerd[1527]: time="2025-12-12T18:38:28.518095787Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:38:28.519380 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 12 18:38:28.527361 containerd[1527]: time="2025-12-12T18:38:28.527218854Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.530602114Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.530801004Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532181285Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532265746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532292833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532321815Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532350458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532369576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:38:28.535788 containerd[1527]: time="2025-12-12T18:38:28.532399056Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.532477833Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539245968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539282203Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539377689Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539409092Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539512235Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539530554Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539544132Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539560947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539587139Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539612996Z" level=info msg="runtime interface created" Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539622400Z" level=info msg="created NRI interface" Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539634556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539656675Z" level=info msg="Connect containerd service" Dec 12 18:38:28.539991 containerd[1527]: time="2025-12-12T18:38:28.539688508Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:38:28.546525 containerd[1527]: time="2025-12-12T18:38:28.543251314Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:38:28.643628 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:38:28.661238 systemd-logind[1510]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:38:28.752080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:28.890574 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:38:28.956031 containerd[1527]: time="2025-12-12T18:38:28.952272659Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:38:28.956031 containerd[1527]: time="2025-12-12T18:38:28.952364640Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 18:38:28.956031 containerd[1527]: time="2025-12-12T18:38:28.952379486Z" level=info msg="Start subscribing containerd event" Dec 12 18:38:28.956031 containerd[1527]: time="2025-12-12T18:38:28.952403102Z" level=info msg="Start recovering state" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.952497686Z" level=info msg="Start event monitor" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961421915Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961444373Z" level=info msg="Start streaming server" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961470838Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961481690Z" level=info msg="runtime interface starting up..." Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961505886Z" level=info msg="starting plugins..." Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961533302Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:38:28.963190 containerd[1527]: time="2025-12-12T18:38:28.961686147Z" level=info msg="containerd successfully booted in 0.540598s" Dec 12 18:38:28.964413 kernel: EDAC MC: Ver: 3.0.0 Dec 12 18:38:28.961950 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:38:28.972920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:29.030690 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:38:29.034838 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 18:38:29.114578 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:38:29.114806 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 18:38:29.122088 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:38:29.124523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:29.124697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:29.125596 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:29.132830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:29.137891 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:29.181165 systemd-networkd[1419]: eth1: Gained IPv6LL Dec 12 18:38:29.184036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:38:29.185115 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:38:29.191525 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 18:38:29.200334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:38:29.204650 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:38:29.217717 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:38:29.230196 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:38:29.232894 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:38:29.327889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
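
containerd finishes booting but reports that no CNI network config was found in /etc/cni/net.d, which is expected on a node whose network add-on has not been installed yet. Purely to illustrate what the CRI plugin is looking for, a conflist of the usual shape (name, bridge and subnet are invented for the example, not anything this host uses):

    # Illustrative conflist; in practice the cluster's CNI add-on installs its own
    # config under /etc/cni/net.d, so this example is written to /tmp only.
    cat <<'EOF' >/tmp/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
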
Dec 12 18:38:29.335872 tar[1520]: linux-amd64/README.md Dec 12 18:38:29.357095 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:38:30.162012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:30.164615 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:38:30.167648 systemd[1]: Startup finished in 4.112s (kernel) + 6.353s (initrd) + 6.209s (userspace) = 16.675s. Dec 12 18:38:30.175753 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:38:30.906999 kubelet[1669]: E1212 18:38:30.906900 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:38:30.909585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:38:30.910059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:38:30.910765 systemd[1]: kubelet.service: Consumed 1.514s CPU time, 264.3M memory peak. Dec 12 18:38:31.493966 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:38:31.496685 systemd[1]: Started sshd@0-134.199.209.86:22-147.75.109.163:48954.service - OpenSSH per-connection server daemon (147.75.109.163:48954). Dec 12 18:38:31.607284 sshd[1681]: Accepted publickey for core from 147.75.109.163 port 48954 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:31.610329 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:31.619672 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:38:31.620885 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:38:31.630598 systemd-logind[1510]: New session 1 of user core. Dec 12 18:38:31.651400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:38:31.654902 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:38:31.673478 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:38:31.677055 systemd-logind[1510]: New session c1 of user core. Dec 12 18:38:31.826239 systemd[1686]: Queued start job for default target default.target. Dec 12 18:38:31.834717 systemd[1686]: Created slice app.slice - User Application Slice. Dec 12 18:38:31.834746 systemd[1686]: Reached target paths.target - Paths. Dec 12 18:38:31.834792 systemd[1686]: Reached target timers.target - Timers. Dec 12 18:38:31.836125 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:38:31.851632 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:38:31.851817 systemd[1686]: Reached target sockets.target - Sockets. Dec 12 18:38:31.851868 systemd[1686]: Reached target basic.target - Basic System. Dec 12 18:38:31.851905 systemd[1686]: Reached target default.target - Main User Target. Dec 12 18:38:31.851934 systemd[1686]: Startup finished in 166ms. Dec 12 18:38:31.852102 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:38:31.865826 systemd[1]: Started session-1.scope - Session 1 of User core. 
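
The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally generated when the node is joined to a cluster (for example by kubeadm), so this failure and the later restarts are expected at this stage. For illustration only, the smallest configuration that would satisfy the flag looks roughly like this, with cgroupDriver matching the SystemdCgroup=true runc option visible in the containerd config dump above:

    # Illustrative minimal KubeletConfiguration, written to /tmp so as not to
    # interfere with whatever provisions the node later.
    cat <<'EOF' >/tmp/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
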
Dec 12 18:38:31.931786 systemd[1]: Started sshd@1-134.199.209.86:22-147.75.109.163:48956.service - OpenSSH per-connection server daemon (147.75.109.163:48956). Dec 12 18:38:31.999117 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 48956 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:32.000630 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:32.005824 systemd-logind[1510]: New session 2 of user core. Dec 12 18:38:32.013823 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:38:32.074034 sshd[1700]: Connection closed by 147.75.109.163 port 48956 Dec 12 18:38:32.073930 sshd-session[1697]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:32.083339 systemd[1]: sshd@1-134.199.209.86:22-147.75.109.163:48956.service: Deactivated successfully. Dec 12 18:38:32.086851 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:38:32.088620 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:38:32.092080 systemd[1]: Started sshd@2-134.199.209.86:22-147.75.109.163:48960.service - OpenSSH per-connection server daemon (147.75.109.163:48960). Dec 12 18:38:32.093559 systemd-logind[1510]: Removed session 2. Dec 12 18:38:32.159090 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 48960 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:32.160480 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:32.168606 systemd-logind[1510]: New session 3 of user core. Dec 12 18:38:32.174774 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:38:32.228798 sshd[1709]: Connection closed by 147.75.109.163 port 48960 Dec 12 18:38:32.229345 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:32.239656 systemd[1]: sshd@2-134.199.209.86:22-147.75.109.163:48960.service: Deactivated successfully. Dec 12 18:38:32.241954 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:38:32.243026 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:38:32.246389 systemd[1]: Started sshd@3-134.199.209.86:22-147.75.109.163:48972.service - OpenSSH per-connection server daemon (147.75.109.163:48972). Dec 12 18:38:32.247935 systemd-logind[1510]: Removed session 3. Dec 12 18:38:32.316772 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 48972 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:32.318375 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:32.323947 systemd-logind[1510]: New session 4 of user core. Dec 12 18:38:32.335834 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:38:32.397580 sshd[1718]: Connection closed by 147.75.109.163 port 48972 Dec 12 18:38:32.398458 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:32.411748 systemd[1]: sshd@3-134.199.209.86:22-147.75.109.163:48972.service: Deactivated successfully. Dec 12 18:38:32.414171 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:38:32.415568 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:38:32.419520 systemd[1]: Started sshd@4-134.199.209.86:22-147.75.109.163:44614.service - OpenSSH per-connection server daemon (147.75.109.163:44614). Dec 12 18:38:32.420705 systemd-logind[1510]: Removed session 4. 
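
The short-lived sessions above are per-connection sshd units spawned from sshd.socket for the core user connecting from 147.75.109.163. Reproducing one by hand and checking the resulting logind session (host address and user taken from the log; the key must of course match the installed authorized_keys):

    ssh core@134.199.209.86        # from the client side
    loginctl list-sessions         # on the droplet: shows the new session of user core
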
Dec 12 18:38:32.497169 sshd[1724]: Accepted publickey for core from 147.75.109.163 port 44614 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:38:32.499110 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:32.506153 systemd-logind[1510]: New session 5 of user core. Dec 12 18:38:32.515817 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:38:32.587462 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:38:32.587844 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:38:33.246785 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 18:38:33.270594 (dockerd)[1746]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:38:33.762518 dockerd[1746]: time="2025-12-12T18:38:33.762240754Z" level=info msg="Starting up" Dec 12 18:38:33.764798 dockerd[1746]: time="2025-12-12T18:38:33.764759068Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:38:33.788012 dockerd[1746]: time="2025-12-12T18:38:33.787919794Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:38:33.837920 dockerd[1746]: time="2025-12-12T18:38:33.837702416Z" level=info msg="Loading containers: start." Dec 12 18:38:33.853519 kernel: Initializing XFRM netlink socket Dec 12 18:38:34.192104 systemd-networkd[1419]: docker0: Link UP Dec 12 18:38:34.196743 dockerd[1746]: time="2025-12-12T18:38:34.196667123Z" level=info msg="Loading containers: done." Dec 12 18:38:34.217398 dockerd[1746]: time="2025-12-12T18:38:34.217307825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:38:34.217747 dockerd[1746]: time="2025-12-12T18:38:34.217427920Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:38:34.217747 dockerd[1746]: time="2025-12-12T18:38:34.217577208Z" level=info msg="Initializing buildkit" Dec 12 18:38:34.253605 dockerd[1746]: time="2025-12-12T18:38:34.253545674Z" level=info msg="Completed buildkit initialization" Dec 12 18:38:34.263306 dockerd[1746]: time="2025-12-12T18:38:34.263219996Z" level=info msg="Daemon has completed initialization" Dec 12 18:38:34.264300 dockerd[1746]: time="2025-12-12T18:38:34.263667051Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:38:34.263851 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:38:35.161559 containerd[1527]: time="2025-12-12T18:38:35.161039596Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 18:38:35.802207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527642021.mount: Deactivated successfully. 
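
dockerd comes up on /run/docker.sock with the overlay2 storage driver, warning that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. Two quick checks against the running daemon (expected values are those reported in the log, not guaranteed):

    docker info --format '{{.Driver}}'             # expected: overlay2
    docker version --format '{{.Server.Version}}'  # expected: 28.0.4
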
Dec 12 18:38:37.240151 containerd[1527]: time="2025-12-12T18:38:37.240077303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:37.241524 containerd[1527]: time="2025-12-12T18:38:37.241244772Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Dec 12 18:38:37.242640 containerd[1527]: time="2025-12-12T18:38:37.242579826Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:37.246601 containerd[1527]: time="2025-12-12T18:38:37.246551279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:37.248442 containerd[1527]: time="2025-12-12T18:38:37.248385881Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.087296681s" Dec 12 18:38:37.248656 containerd[1527]: time="2025-12-12T18:38:37.248632457Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 12 18:38:37.249797 containerd[1527]: time="2025-12-12T18:38:37.249739019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 18:38:39.206530 containerd[1527]: time="2025-12-12T18:38:39.205811267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:39.207514 containerd[1527]: time="2025-12-12T18:38:39.207453172Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Dec 12 18:38:39.209086 containerd[1527]: time="2025-12-12T18:38:39.209035626Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:39.212130 containerd[1527]: time="2025-12-12T18:38:39.212092002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:39.212725 containerd[1527]: time="2025-12-12T18:38:39.212700624Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.962923854s" Dec 12 18:38:39.212821 containerd[1527]: time="2025-12-12T18:38:39.212809079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 12 
18:38:39.213687 containerd[1527]: time="2025-12-12T18:38:39.213665595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 18:38:40.594530 containerd[1527]: time="2025-12-12T18:38:40.593574962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:40.595570 containerd[1527]: time="2025-12-12T18:38:40.595527670Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Dec 12 18:38:40.596865 containerd[1527]: time="2025-12-12T18:38:40.596819805Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:40.600005 containerd[1527]: time="2025-12-12T18:38:40.599942412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:40.601348 containerd[1527]: time="2025-12-12T18:38:40.601304795Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.387480567s" Dec 12 18:38:40.601521 containerd[1527]: time="2025-12-12T18:38:40.601503589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 12 18:38:40.602726 containerd[1527]: time="2025-12-12T18:38:40.602648681Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 18:38:41.098077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:38:41.101782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:41.392264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:41.403242 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:38:41.497425 kubelet[2041]: E1212 18:38:41.497313 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:38:41.502270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:38:41.502535 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:38:41.503066 systemd[1]: kubelet.service: Consumed 286ms CPU time, 110.2M memory peak. Dec 12 18:38:42.014299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2463545379.mount: Deactivated successfully. 
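
The PullImage lines are containerd fetching the Kubernetes control-plane images into its k8s.io namespace on behalf of the CRI. The same images can be listed or pulled by hand with ctr (namespace taken from the log; crictl works as well once pointed at /run/containerd/containerd.sock):

    ctr --namespace k8s.io images ls                               # list what the CRI has pulled so far (as root)
    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10  # repeat one pull manually
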
Dec 12 18:38:42.809859 containerd[1527]: time="2025-12-12T18:38:42.809778819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:42.810879 containerd[1527]: time="2025-12-12T18:38:42.810614787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Dec 12 18:38:42.811598 containerd[1527]: time="2025-12-12T18:38:42.811561729Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:42.813831 containerd[1527]: time="2025-12-12T18:38:42.813780938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:42.814575 containerd[1527]: time="2025-12-12T18:38:42.814359269Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.211675638s" Dec 12 18:38:42.814575 containerd[1527]: time="2025-12-12T18:38:42.814397746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 12 18:38:42.815143 containerd[1527]: time="2025-12-12T18:38:42.815105436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 18:38:42.817802 systemd-resolved[1381]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Dec 12 18:38:43.637944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582985554.mount: Deactivated successfully. 
Dec 12 18:38:44.746399 containerd[1527]: time="2025-12-12T18:38:44.744820876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:44.746399 containerd[1527]: time="2025-12-12T18:38:44.746333603Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Dec 12 18:38:44.747244 containerd[1527]: time="2025-12-12T18:38:44.747200293Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:44.750332 containerd[1527]: time="2025-12-12T18:38:44.750280609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:44.751774 containerd[1527]: time="2025-12-12T18:38:44.751733810Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.936004555s" Dec 12 18:38:44.751923 containerd[1527]: time="2025-12-12T18:38:44.751904329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 12 18:38:44.752997 containerd[1527]: time="2025-12-12T18:38:44.752945011Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:38:45.184273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803042595.mount: Deactivated successfully. 
Dec 12 18:38:45.189919 containerd[1527]: time="2025-12-12T18:38:45.189833954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:45.191557 containerd[1527]: time="2025-12-12T18:38:45.191351899Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:38:45.192464 containerd[1527]: time="2025-12-12T18:38:45.192390933Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:45.194780 containerd[1527]: time="2025-12-12T18:38:45.194715079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:38:45.196260 containerd[1527]: time="2025-12-12T18:38:45.195699512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.709015ms" Dec 12 18:38:45.196260 containerd[1527]: time="2025-12-12T18:38:45.195764126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:38:45.196411 containerd[1527]: time="2025-12-12T18:38:45.196387641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:38:45.879711 systemd-resolved[1381]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Dec 12 18:38:45.902448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339800231.mount: Deactivated successfully. 
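
systemd-resolved has now twice fallen back to a plain-UDP feature set for the DigitalOcean resolvers (67.207.67.3 and 67.207.67.2), meaning EDNS0 responses were not coming back cleanly. The downgrade itself only shows up in the journal, but the per-link DNS configuration can be inspected and exercised with resolvectl:

    resolvectl status                         # per-link DNS servers and protocol settings
    resolvectl query 0.flatcar.pool.ntp.org   # run a lookup through systemd-resolved
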
Dec 12 18:38:48.336564 containerd[1527]: time="2025-12-12T18:38:48.336403943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:48.338279 containerd[1527]: time="2025-12-12T18:38:48.337746843Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Dec 12 18:38:48.339228 containerd[1527]: time="2025-12-12T18:38:48.339175675Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:48.344004 containerd[1527]: time="2025-12-12T18:38:48.343943951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:38:48.345618 containerd[1527]: time="2025-12-12T18:38:48.345566135Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.149136812s" Dec 12 18:38:48.345822 containerd[1527]: time="2025-12-12T18:38:48.345794806Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:38:51.597965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:38:51.603910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:51.830638 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:38:51.830993 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:38:51.831561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:51.831938 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.5M memory peak. Dec 12 18:38:51.835750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:51.880532 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-5.scope)... Dec 12 18:38:51.880787 systemd[1]: Reloading... Dec 12 18:38:52.034538 zram_generator::config[2239]: No configuration found. Dec 12 18:38:52.345828 systemd[1]: Reloading finished in 464 ms. Dec 12 18:38:52.421181 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:38:52.421319 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:38:52.421869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:52.421946 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.2M memory peak. Dec 12 18:38:52.424626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:38:52.617686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:38:52.631166 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:38:52.703082 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:38:52.703082 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:38:52.703082 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:38:52.703641 kubelet[2293]: I1212 18:38:52.703190 2293 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:38:53.160011 kubelet[2293]: I1212 18:38:53.159954 2293 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:38:53.160259 kubelet[2293]: I1212 18:38:53.160241 2293 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:38:53.160768 kubelet[2293]: I1212 18:38:53.160741 2293 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:38:53.197583 kubelet[2293]: I1212 18:38:53.197533 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:38:53.199074 kubelet[2293]: E1212 18:38:53.199012 2293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://134.199.209.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:53.219411 kubelet[2293]: I1212 18:38:53.219346 2293 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:38:53.225531 kubelet[2293]: I1212 18:38:53.224876 2293 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:38:53.229292 kubelet[2293]: I1212 18:38:53.229209 2293 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:38:53.229568 kubelet[2293]: I1212 18:38:53.229292 2293 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-7-7f06ea9468","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:38:53.229771 kubelet[2293]: I1212 18:38:53.229582 2293 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:38:53.229771 kubelet[2293]: I1212 18:38:53.229595 2293 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:38:53.233158 kubelet[2293]: I1212 18:38:53.233055 2293 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:38:53.237659 kubelet[2293]: I1212 18:38:53.237582 2293 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:38:53.237850 kubelet[2293]: I1212 18:38:53.237689 2293 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:38:53.237850 kubelet[2293]: I1212 18:38:53.237734 2293 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:38:53.237850 kubelet[2293]: I1212 18:38:53.237751 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:38:53.245396 kubelet[2293]: W1212 18:38:53.245059 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.209.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-7-7f06ea9468&limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:53.245396 kubelet[2293]: E1212 18:38:53.245150 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.209.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-7-7f06ea9468&limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:53.247097 
kubelet[2293]: W1212 18:38:53.245870 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.209.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:53.247097 kubelet[2293]: E1212 18:38:53.245941 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.209.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:53.247097 kubelet[2293]: I1212 18:38:53.246848 2293 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:38:53.251526 kubelet[2293]: I1212 18:38:53.251257 2293 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:38:53.252147 kubelet[2293]: W1212 18:38:53.252114 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:38:53.253978 kubelet[2293]: I1212 18:38:53.253668 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:38:53.253978 kubelet[2293]: I1212 18:38:53.253722 2293 server.go:1287] "Started kubelet" Dec 12 18:38:53.255836 kubelet[2293]: I1212 18:38:53.255210 2293 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:38:53.257272 kubelet[2293]: I1212 18:38:53.256623 2293 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:38:53.260135 kubelet[2293]: I1212 18:38:53.259672 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:38:53.260957 kubelet[2293]: I1212 18:38:53.260882 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:38:53.261317 kubelet[2293]: I1212 18:38:53.261300 2293 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:38:53.267817 kubelet[2293]: E1212 18:38:53.264655 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://134.199.209.86:6443/api/v1/namespaces/default/events\": dial tcp 134.199.209.86:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-7-7f06ea9468.18808bc5418b2f82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-7-7f06ea9468,UID:ci-4459.2.2-7-7f06ea9468,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-7-7f06ea9468,},FirstTimestamp:2025-12-12 18:38:53.253693314 +0000 UTC m=+0.615862711,LastTimestamp:2025-12-12 18:38:53.253693314 +0000 UTC m=+0.615862711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-7-7f06ea9468,}" Dec 12 18:38:53.272886 kubelet[2293]: I1212 18:38:53.270046 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:38:53.274940 kubelet[2293]: I1212 18:38:53.274304 2293 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:38:53.274940 kubelet[2293]: E1212 
18:38:53.274757 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" Dec 12 18:38:53.275234 kubelet[2293]: I1212 18:38:53.275208 2293 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:38:53.275296 kubelet[2293]: I1212 18:38:53.275289 2293 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:38:53.275796 kubelet[2293]: W1212 18:38:53.275745 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://134.199.209.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:53.275882 kubelet[2293]: E1212 18:38:53.275815 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://134.199.209.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:53.275986 kubelet[2293]: E1212 18:38:53.275951 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.209.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-7-7f06ea9468?timeout=10s\": dial tcp 134.199.209.86:6443: connect: connection refused" interval="200ms" Dec 12 18:38:53.278773 kubelet[2293]: I1212 18:38:53.278705 2293 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:38:53.279243 kubelet[2293]: E1212 18:38:53.279064 2293 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:38:53.280750 kubelet[2293]: I1212 18:38:53.279341 2293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:38:53.282961 kubelet[2293]: I1212 18:38:53.282923 2293 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:38:53.321091 kubelet[2293]: I1212 18:38:53.321043 2293 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:38:53.321091 kubelet[2293]: I1212 18:38:53.321078 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:38:53.321091 kubelet[2293]: I1212 18:38:53.321108 2293 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:38:53.322084 kubelet[2293]: I1212 18:38:53.322030 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:38:53.325281 kubelet[2293]: I1212 18:38:53.325243 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:38:53.325794 kubelet[2293]: I1212 18:38:53.325763 2293 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:38:53.325861 kubelet[2293]: I1212 18:38:53.325815 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
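
The container_manager_linux.go entry above dumps the kubelet's effective node configuration as a JSON blob, including its hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and so on). A minimal Go sketch that pulls those thresholds out of a trimmed excerpt of that blob; the threshold and nodeConfig struct names are illustrative only, not the kubelet's own types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // threshold and nodeConfig mirror a small part of the JSON blob logged by
    // container_manager_linux.go above; they are illustrative, not kubelet types.
    type threshold struct {
        Signal   string `json:"Signal"`
        Operator string `json:"Operator"`
        Value    struct {
            Quantity   *string `json:"Quantity"`
            Percentage float64 `json:"Percentage"`
        } `json:"Value"`
    }

    type nodeConfig struct {
        NodeName               string      `json:"NodeName"`
        CgroupDriver           string      `json:"CgroupDriver"`
        HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
    }

    func main() {
        // Trimmed excerpt of the nodeConfig JSON from the log entry above.
        raw := `{"NodeName":"ci-4459.2.2-7-7f06ea9468","CgroupDriver":"systemd",
            "HardEvictionThresholds":[
              {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
              {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
              {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}}]}`

        var cfg nodeConfig
        if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
            panic(err)
        }
        for _, t := range cfg.HardEvictionThresholds {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }
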
Dec 12 18:38:53.325861 kubelet[2293]: I1212 18:38:53.325827 2293 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:38:53.325977 kubelet[2293]: E1212 18:38:53.325920 2293 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:38:53.331850 kubelet[2293]: W1212 18:38:53.331655 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.209.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:53.332403 kubelet[2293]: E1212 18:38:53.332369 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.209.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:53.375440 kubelet[2293]: E1212 18:38:53.375374 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" Dec 12 18:38:53.379228 kubelet[2293]: I1212 18:38:53.379159 2293 policy_none.go:49] "None policy: Start" Dec 12 18:38:53.379228 kubelet[2293]: I1212 18:38:53.379198 2293 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:38:53.379228 kubelet[2293]: I1212 18:38:53.379213 2293 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:38:53.388796 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:38:53.402370 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:38:53.408917 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 18:38:53.421280 kubelet[2293]: I1212 18:38:53.421132 2293 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:38:53.423734 kubelet[2293]: I1212 18:38:53.423394 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:38:53.423887 kubelet[2293]: I1212 18:38:53.423804 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:38:53.424277 kubelet[2293]: I1212 18:38:53.424251 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:38:53.427760 kubelet[2293]: E1212 18:38:53.427723 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:38:53.427760 kubelet[2293]: E1212 18:38:53.427768 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-7-7f06ea9468\" not found" Dec 12 18:38:53.440154 systemd[1]: Created slice kubepods-burstable-pod264a12091e31d97baf0d208dea762a5d.slice - libcontainer container kubepods-burstable-pod264a12091e31d97baf0d208dea762a5d.slice. 
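
The "Created slice" entries above show the systemd cgroup layout the kubelet sets up: a top-level kubepods.slice, per-QoS kubepods-burstable.slice and kubepods-besteffort.slice, and per-pod slices such as kubepods-burstable-pod264a12091e31d97baf0d208dea762a5d.slice. A small sketch of that naming pattern; escaping dashes in the UID as underscores is an assumption about the systemd cgroup driver, since the pod UIDs in this log happen to contain none:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the per-pod slice names visible in the systemd log
    // above: kubepods-<qos>-pod<uid>.slice, nested under kubepods-<qos>.slice.
    // Replacing "-" with "_" in the UID is an assumption; this excerpt alone
    // cannot confirm it.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UID of the kube-apiserver static pod, taken from the log above.
        fmt.Println(podSlice("burstable", "264a12091e31d97baf0d208dea762a5d"))
        // Output: kubepods-burstable-pod264a12091e31d97baf0d208dea762a5d.slice
    }
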
Dec 12 18:38:53.454988 kubelet[2293]: E1212 18:38:53.454236 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.458108 systemd[1]: Created slice kubepods-burstable-pod80ea2747ff03b91fed1b2a8e2211139e.slice - libcontainer container kubepods-burstable-pod80ea2747ff03b91fed1b2a8e2211139e.slice. Dec 12 18:38:53.462928 kubelet[2293]: E1212 18:38:53.462776 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.467848 systemd[1]: Created slice kubepods-burstable-pod28cf3664849ce40d4b1dc28459be0675.slice - libcontainer container kubepods-burstable-pod28cf3664849ce40d4b1dc28459be0675.slice. Dec 12 18:38:53.470571 kubelet[2293]: E1212 18:38:53.470472 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.477434 kubelet[2293]: E1212 18:38:53.477370 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.209.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-7-7f06ea9468?timeout=10s\": dial tcp 134.199.209.86:6443: connect: connection refused" interval="400ms" Dec 12 18:38:53.526230 kubelet[2293]: I1212 18:38:53.526178 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.526865 kubelet[2293]: E1212 18:38:53.526827 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.209.86:6443/api/v1/nodes\": dial tcp 134.199.209.86:6443: connect: connection refused" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576330 kubelet[2293]: I1212 18:38:53.576149 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576330 kubelet[2293]: I1212 18:38:53.576214 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576330 kubelet[2293]: I1212 18:38:53.576248 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28cf3664849ce40d4b1dc28459be0675-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-7-7f06ea9468\" (UID: \"28cf3664849ce40d4b1dc28459be0675\") " pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576330 kubelet[2293]: I1212 18:38:53.576274 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " 
pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576685 kubelet[2293]: I1212 18:38:53.576332 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576685 kubelet[2293]: I1212 18:38:53.576411 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576685 kubelet[2293]: I1212 18:38:53.576449 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576685 kubelet[2293]: I1212 18:38:53.576481 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.576685 kubelet[2293]: I1212 18:38:53.576554 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.729121 kubelet[2293]: I1212 18:38:53.728369 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.729121 kubelet[2293]: E1212 18:38:53.728994 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.209.86:6443/api/v1/nodes\": dial tcp 134.199.209.86:6443: connect: connection refused" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:53.756150 kubelet[2293]: E1212 18:38:53.756018 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:53.757374 containerd[1527]: time="2025-12-12T18:38:53.757325584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-7-7f06ea9468,Uid:264a12091e31d97baf0d208dea762a5d,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:53.763973 kubelet[2293]: E1212 18:38:53.763628 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:53.771260 containerd[1527]: time="2025-12-12T18:38:53.771103727Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-7-7f06ea9468,Uid:80ea2747ff03b91fed1b2a8e2211139e,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:53.772393 kubelet[2293]: E1212 18:38:53.772348 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:53.773873 containerd[1527]: time="2025-12-12T18:38:53.773813968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-7-7f06ea9468,Uid:28cf3664849ce40d4b1dc28459be0675,Namespace:kube-system,Attempt:0,}" Dec 12 18:38:53.878573 kubelet[2293]: E1212 18:38:53.878505 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://134.199.209.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-7-7f06ea9468?timeout=10s\": dial tcp 134.199.209.86:6443: connect: connection refused" interval="800ms" Dec 12 18:38:53.910195 containerd[1527]: time="2025-12-12T18:38:53.910116189Z" level=info msg="connecting to shim c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a" address="unix:///run/containerd/s/7af84992635fe696c33a1e440dffa486e8bd904819ab1e14db38db9547fd118b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:53.912928 containerd[1527]: time="2025-12-12T18:38:53.912849784Z" level=info msg="connecting to shim e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494" address="unix:///run/containerd/s/fef91381826f98f0e631b0cb37f351f1273ec7f68382082e85576eea73002298" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:53.914017 containerd[1527]: time="2025-12-12T18:38:53.913940389Z" level=info msg="connecting to shim b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887" address="unix:///run/containerd/s/760e0e639df1d95eda2d5462f6d517b896a1bcdf3b51f14ad61a398e73e7b1ca" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:38:54.064025 systemd[1]: Started cri-containerd-b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887.scope - libcontainer container b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887. Dec 12 18:38:54.067917 systemd[1]: Started cri-containerd-c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a.scope - libcontainer container c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a. Dec 12 18:38:54.071703 systemd[1]: Started cri-containerd-e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494.scope - libcontainer container e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494. 
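
The controller.go "Failed to ensure lease exists, will retry" entries above report interval="200ms", then "400ms", then "800ms" while the API server at 134.199.209.86:6443 still refuses connections. A minimal sketch of that doubling retry interval, based only on the three values visible here (whatever upper bound applies is not shown in this excerpt):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Observed in the log above: 200ms -> 400ms -> 800ms between retries.
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            fmt.Printf("attempt %d: next retry in %s\n", attempt, interval)
            interval *= 2
        }
    }
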
Dec 12 18:38:54.135391 kubelet[2293]: I1212 18:38:54.135299 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:54.137306 kubelet[2293]: E1212 18:38:54.137100 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://134.199.209.86:6443/api/v1/nodes\": dial tcp 134.199.209.86:6443: connect: connection refused" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:54.184888 kubelet[2293]: W1212 18:38:54.184732 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://134.199.209.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-7-7f06ea9468&limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:54.185312 kubelet[2293]: E1212 18:38:54.184963 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://134.199.209.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-7-7f06ea9468&limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:54.224042 containerd[1527]: time="2025-12-12T18:38:54.223960291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-7-7f06ea9468,Uid:264a12091e31d97baf0d208dea762a5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887\"" Dec 12 18:38:54.232551 kubelet[2293]: E1212 18:38:54.232444 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:54.235199 containerd[1527]: time="2025-12-12T18:38:54.234549024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-7-7f06ea9468,Uid:80ea2747ff03b91fed1b2a8e2211139e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a\"" Dec 12 18:38:54.235346 kubelet[2293]: E1212 18:38:54.235296 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:54.236995 containerd[1527]: time="2025-12-12T18:38:54.236939594Z" level=info msg="CreateContainer within sandbox \"b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:38:54.251588 containerd[1527]: time="2025-12-12T18:38:54.250641827Z" level=info msg="Container 24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:54.262769 containerd[1527]: time="2025-12-12T18:38:54.262710122Z" level=info msg="CreateContainer within sandbox \"c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:38:54.268666 containerd[1527]: time="2025-12-12T18:38:54.268569693Z" level=info msg="CreateContainer within sandbox \"b79c7248a7ade5ea8636693646ca99eb240009d231aa5f4c51e418781ad42887\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc\"" Dec 12 18:38:54.271056 containerd[1527]: time="2025-12-12T18:38:54.270927436Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-7-7f06ea9468,Uid:28cf3664849ce40d4b1dc28459be0675,Namespace:kube-system,Attempt:0,} returns sandbox id \"e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494\"" Dec 12 18:38:54.272261 containerd[1527]: time="2025-12-12T18:38:54.270986863Z" level=info msg="StartContainer for \"24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc\"" Dec 12 18:38:54.272417 kubelet[2293]: E1212 18:38:54.271979 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:54.274735 containerd[1527]: time="2025-12-12T18:38:54.274686043Z" level=info msg="CreateContainer within sandbox \"e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:38:54.276666 containerd[1527]: time="2025-12-12T18:38:54.276618343Z" level=info msg="connecting to shim 24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc" address="unix:///run/containerd/s/760e0e639df1d95eda2d5462f6d517b896a1bcdf3b51f14ad61a398e73e7b1ca" protocol=ttrpc version=3 Dec 12 18:38:54.281629 containerd[1527]: time="2025-12-12T18:38:54.281572345Z" level=info msg="Container a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:54.291118 containerd[1527]: time="2025-12-12T18:38:54.291007568Z" level=info msg="Container 9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:38:54.298694 containerd[1527]: time="2025-12-12T18:38:54.298635520Z" level=info msg="CreateContainer within sandbox \"c739aabe65487725032f530caae054c48aaf5a2eacb554f2eb0fb6cfca6e051a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375\"" Dec 12 18:38:54.299932 containerd[1527]: time="2025-12-12T18:38:54.299696465Z" level=info msg="StartContainer for \"a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375\"" Dec 12 18:38:54.304449 containerd[1527]: time="2025-12-12T18:38:54.304394962Z" level=info msg="connecting to shim a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375" address="unix:///run/containerd/s/7af84992635fe696c33a1e440dffa486e8bd904819ab1e14db38db9547fd118b" protocol=ttrpc version=3 Dec 12 18:38:54.307916 containerd[1527]: time="2025-12-12T18:38:54.307854716Z" level=info msg="CreateContainer within sandbox \"e374a1e46010065e8244d8ac4441e791690ddbc4cfb48af6a106b54db14a4494\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518\"" Dec 12 18:38:54.309938 containerd[1527]: time="2025-12-12T18:38:54.309891415Z" level=info msg="StartContainer for \"9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518\"" Dec 12 18:38:54.312181 containerd[1527]: time="2025-12-12T18:38:54.311875840Z" level=info msg="connecting to shim 9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518" address="unix:///run/containerd/s/fef91381826f98f0e631b0cb37f351f1273ec7f68382082e85576eea73002298" protocol=ttrpc version=3 Dec 12 18:38:54.313129 systemd[1]: Started cri-containerd-24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc.scope - libcontainer container 
24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc. Dec 12 18:38:54.339860 systemd[1]: Started cri-containerd-a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375.scope - libcontainer container a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375. Dec 12 18:38:54.377763 systemd[1]: Started cri-containerd-9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518.scope - libcontainer container 9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518. Dec 12 18:38:54.398765 kubelet[2293]: W1212 18:38:54.398726 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://134.199.209.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:54.398991 kubelet[2293]: E1212 18:38:54.398940 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://134.199.209.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:54.467091 containerd[1527]: time="2025-12-12T18:38:54.465171787Z" level=info msg="StartContainer for \"24d23454d3be39d5e8c1c9f3c4b23a89fc11969fa7e46a9263ba8943fac5f9dc\" returns successfully" Dec 12 18:38:54.494380 containerd[1527]: time="2025-12-12T18:38:54.494277261Z" level=info msg="StartContainer for \"a395901f5dc15d5b037f1452dfd4937de3cb98789368d446fc2ed3d1865ac375\" returns successfully" Dec 12 18:38:54.546978 containerd[1527]: time="2025-12-12T18:38:54.546925133Z" level=info msg="StartContainer for \"9718d68f22fffeab97a82b6fd7ff75b600067d4849a8cc5164bb6a2dab783518\" returns successfully" Dec 12 18:38:54.560883 kubelet[2293]: W1212 18:38:54.560799 2293 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://134.199.209.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 134.199.209.86:6443: connect: connection refused Dec 12 18:38:54.560883 kubelet[2293]: E1212 18:38:54.560889 2293 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://134.199.209.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 134.199.209.86:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:38:54.941554 kubelet[2293]: I1212 18:38:54.940631 2293 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:55.370531 kubelet[2293]: E1212 18:38:55.369330 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:55.370959 kubelet[2293]: E1212 18:38:55.370937 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:55.375313 kubelet[2293]: E1212 18:38:55.375280 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:55.375887 kubelet[2293]: E1212 18:38:55.375856 2293 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:55.380149 kubelet[2293]: E1212 18:38:55.380117 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:55.380270 kubelet[2293]: E1212 18:38:55.380260 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:56.384475 kubelet[2293]: E1212 18:38:56.384426 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:56.384953 kubelet[2293]: E1212 18:38:56.384638 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:56.384995 kubelet[2293]: E1212 18:38:56.384970 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:56.385102 kubelet[2293]: E1212 18:38:56.385082 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:56.388422 kubelet[2293]: E1212 18:38:56.388381 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:56.388820 kubelet[2293]: E1212 18:38:56.388757 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:57.388868 kubelet[2293]: E1212 18:38:57.388822 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.389369 kubelet[2293]: E1212 18:38:57.389013 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:57.389369 kubelet[2293]: E1212 18:38:57.389293 2293 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.389552 kubelet[2293]: E1212 18:38:57.389528 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:57.931501 kubelet[2293]: E1212 18:38:57.931431 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-7-7f06ea9468\" not found" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.960516 kubelet[2293]: I1212 18:38:57.958985 2293 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.975757 
kubelet[2293]: I1212 18:38:57.975709 2293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.991532 kubelet[2293]: E1212 18:38:57.991450 2293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-7-7f06ea9468\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.991532 kubelet[2293]: I1212 18:38:57.991527 2293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.994515 kubelet[2293]: E1212 18:38:57.994460 2293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.994515 kubelet[2293]: I1212 18:38:57.994511 2293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:57.998893 kubelet[2293]: E1212 18:38:57.998849 2293 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:58.247904 kubelet[2293]: I1212 18:38:58.247404 2293 apiserver.go:52] "Watching apiserver" Dec 12 18:38:58.275629 kubelet[2293]: I1212 18:38:58.275581 2293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:38:59.330039 kubelet[2293]: I1212 18:38:59.329912 2293 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:38:59.340919 kubelet[2293]: W1212 18:38:59.340870 2293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:38:59.341335 kubelet[2293]: E1212 18:38:59.341311 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:38:59.391095 kubelet[2293]: E1212 18:38:59.390629 2293 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:00.138415 systemd[1]: Reload requested from client PID 2566 ('systemctl') (unit session-5.scope)... Dec 12 18:39:00.138444 systemd[1]: Reloading... Dec 12 18:39:00.269025 zram_generator::config[2605]: No configuration found. Dec 12 18:39:00.788017 systemd[1]: Reloading finished in 648 ms. Dec 12 18:39:00.824355 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:00.841233 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:39:00.841677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:39:00.841783 systemd[1]: kubelet.service: Consumed 1.214s CPU time, 127.5M memory peak. Dec 12 18:39:00.844993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:39:01.137799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
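
Reading the journal timestamps above together: the kubelet first logs "Attempting to register node" at 18:38:53.526 and "Successfully registered node" at 18:38:57.958, presumably once the static kube-apiserver container it started in between begins answering. A small Go sketch of that elapsed time, using the rounded timestamps from this excerpt:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Rounded journal timestamps from the excerpt above: first
        // "Attempting to register node" vs. "Successfully registered node".
        // The timestamps are well-formed, so parse errors are ignored here.
        layout := "2006-01-02 15:04:05.000"
        first, _ := time.Parse(layout, "2025-12-12 18:38:53.526")
        registered, _ := time.Parse(layout, "2025-12-12 18:38:57.958")
        fmt.Printf("node registration took roughly %s\n", registered.Sub(first))
        // Prints: node registration took roughly 4.432s
    }
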
Dec 12 18:39:01.151919 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:39:01.251915 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:39:01.252762 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:39:01.252762 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:39:01.252762 kubelet[2660]: I1212 18:39:01.252344 2660 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:39:01.278600 kubelet[2660]: I1212 18:39:01.278541 2660 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:39:01.279510 kubelet[2660]: I1212 18:39:01.278797 2660 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:39:01.279510 kubelet[2660]: I1212 18:39:01.279321 2660 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:39:01.282999 kubelet[2660]: I1212 18:39:01.282951 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 18:39:01.289178 kubelet[2660]: I1212 18:39:01.289114 2660 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:39:01.298578 kubelet[2660]: I1212 18:39:01.298541 2660 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:39:01.307980 kubelet[2660]: I1212 18:39:01.307902 2660 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:39:01.309521 kubelet[2660]: I1212 18:39:01.309101 2660 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:39:01.309521 kubelet[2660]: I1212 18:39:01.309154 2660 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-7-7f06ea9468","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:39:01.309521 kubelet[2660]: I1212 18:39:01.309402 2660 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:39:01.309521 kubelet[2660]: I1212 18:39:01.309415 2660 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:39:01.310585 kubelet[2660]: I1212 18:39:01.310016 2660 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:01.310585 kubelet[2660]: I1212 18:39:01.310280 2660 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:39:01.310585 kubelet[2660]: I1212 18:39:01.310310 2660 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:39:01.310585 kubelet[2660]: I1212 18:39:01.310371 2660 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:39:01.310585 kubelet[2660]: I1212 18:39:01.310385 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:39:01.313017 kubelet[2660]: I1212 18:39:01.312992 2660 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:39:01.313767 kubelet[2660]: I1212 18:39:01.313737 2660 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:39:01.314565 kubelet[2660]: I1212 18:39:01.314543 2660 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:39:01.314698 kubelet[2660]: I1212 18:39:01.314688 2660 server.go:1287] "Started kubelet" Dec 12 18:39:01.320521 kubelet[2660]: I1212 18:39:01.320428 2660 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:39:01.329535 kubelet[2660]: I1212 18:39:01.328854 2660 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:39:01.329535 kubelet[2660]: I1212 18:39:01.329310 2660 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:39:01.330735 kubelet[2660]: I1212 18:39:01.330707 2660 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:39:01.339234 kubelet[2660]: I1212 18:39:01.339198 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:39:01.345061 kubelet[2660]: I1212 18:39:01.345018 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:39:01.347223 kubelet[2660]: I1212 18:39:01.347187 2660 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:39:01.347728 kubelet[2660]: E1212 18:39:01.347705 2660 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-7-7f06ea9468\" not found" Dec 12 18:39:01.348813 kubelet[2660]: I1212 18:39:01.348666 2660 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:39:01.348988 kubelet[2660]: I1212 18:39:01.348974 2660 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:39:01.361272 kubelet[2660]: I1212 18:39:01.360418 2660 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:39:01.364904 kubelet[2660]: I1212 18:39:01.364753 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:39:01.404645 kubelet[2660]: I1212 18:39:01.404393 2660 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:39:01.410885 kubelet[2660]: I1212 18:39:01.410711 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:39:01.412852 kubelet[2660]: I1212 18:39:01.412511 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:39:01.412852 kubelet[2660]: I1212 18:39:01.412554 2660 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:39:01.412852 kubelet[2660]: I1212 18:39:01.412576 2660 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:39:01.412852 kubelet[2660]: I1212 18:39:01.412583 2660 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:39:01.412852 kubelet[2660]: E1212 18:39:01.412636 2660 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:39:01.426645 kubelet[2660]: E1212 18:39:01.426575 2660 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:39:01.504270 kubelet[2660]: I1212 18:39:01.504018 2660 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:39:01.504270 kubelet[2660]: I1212 18:39:01.504049 2660 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:39:01.504270 kubelet[2660]: I1212 18:39:01.504085 2660 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505468 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505525 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505556 2660 policy_none.go:49] "None policy: Start" Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505572 2660 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505588 2660 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:39:01.506367 kubelet[2660]: I1212 18:39:01.505763 2660 state_mem.go:75] "Updated machine memory state" Dec 12 18:39:01.512893 kubelet[2660]: E1212 18:39:01.512853 2660 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:39:01.518859 kubelet[2660]: I1212 18:39:01.518321 2660 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:39:01.518859 kubelet[2660]: I1212 18:39:01.518577 2660 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:39:01.518859 kubelet[2660]: I1212 18:39:01.518594 2660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:39:01.522446 kubelet[2660]: I1212 18:39:01.522413 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:39:01.524322 kubelet[2660]: E1212 18:39:01.524285 2660 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:39:01.626846 kubelet[2660]: I1212 18:39:01.626019 2660 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.642906 kubelet[2660]: I1212 18:39:01.642844 2660 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.644177 kubelet[2660]: I1212 18:39:01.644141 2660 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.715932 kubelet[2660]: I1212 18:39:01.715514 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.717958 kubelet[2660]: I1212 18:39:01.717866 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.720185 kubelet[2660]: I1212 18:39:01.718536 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.732126 kubelet[2660]: W1212 18:39:01.731038 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:01.732692 kubelet[2660]: W1212 18:39:01.732661 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:01.732949 kubelet[2660]: E1212 18:39:01.732928 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.739712 kubelet[2660]: W1212 18:39:01.738749 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:01.754559 kubelet[2660]: I1212 18:39:01.752702 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.754559 kubelet[2660]: I1212 18:39:01.752885 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.754559 kubelet[2660]: I1212 18:39:01.752927 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.754559 kubelet[2660]: I1212 18:39:01.752961 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.754559 kubelet[2660]: I1212 18:39:01.753005 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/28cf3664849ce40d4b1dc28459be0675-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-7-7f06ea9468\" (UID: \"28cf3664849ce40d4b1dc28459be0675\") " pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.755756 kubelet[2660]: I1212 18:39:01.753052 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/264a12091e31d97baf0d208dea762a5d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" (UID: \"264a12091e31d97baf0d208dea762a5d\") " pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.756866 kubelet[2660]: I1212 18:39:01.755948 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.756866 kubelet[2660]: I1212 18:39:01.756004 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:01.756866 kubelet[2660]: I1212 18:39:01.756032 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80ea2747ff03b91fed1b2a8e2211139e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-7-7f06ea9468\" (UID: \"80ea2747ff03b91fed1b2a8e2211139e\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:02.032975 kubelet[2660]: E1212 18:39:02.032361 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.034104 kubelet[2660]: E1212 18:39:02.034062 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.039250 kubelet[2660]: E1212 18:39:02.039188 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.312918 kubelet[2660]: I1212 18:39:02.312679 2660 apiserver.go:52] "Watching apiserver" Dec 12 18:39:02.349694 kubelet[2660]: I1212 18:39:02.349609 2660 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:39:02.436512 kubelet[2660]: I1212 18:39:02.436421 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-7-7f06ea9468" 
podStartSLOduration=3.436404635 podStartE2EDuration="3.436404635s" podCreationTimestamp="2025-12-12 18:38:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.436312833 +0000 UTC m=+1.274089330" watchObservedRunningTime="2025-12-12 18:39:02.436404635 +0000 UTC m=+1.274181125" Dec 12 18:39:02.448460 kubelet[2660]: I1212 18:39:02.448381 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" podStartSLOduration=1.448356689 podStartE2EDuration="1.448356689s" podCreationTimestamp="2025-12-12 18:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.447988109 +0000 UTC m=+1.285764606" watchObservedRunningTime="2025-12-12 18:39:02.448356689 +0000 UTC m=+1.286133178" Dec 12 18:39:02.462057 kubelet[2660]: I1212 18:39:02.461933 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:02.463634 kubelet[2660]: E1212 18:39:02.463549 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.464475 kubelet[2660]: I1212 18:39:02.464440 2660 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:02.483447 kubelet[2660]: W1212 18:39:02.483273 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:02.483447 kubelet[2660]: E1212 18:39:02.483345 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-7-7f06ea9468\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:02.483801 kubelet[2660]: E1212 18:39:02.483538 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.486125 kubelet[2660]: W1212 18:39:02.486015 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 12 18:39:02.486125 kubelet[2660]: E1212 18:39:02.486093 2660 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-7-7f06ea9468\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" Dec 12 18:39:02.486662 kubelet[2660]: E1212 18:39:02.486615 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:02.509519 kubelet[2660]: I1212 18:39:02.509356 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-7-7f06ea9468" podStartSLOduration=1.509335832 podStartE2EDuration="1.509335832s" podCreationTimestamp="2025-12-12 18:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:02.472712118 +0000 UTC m=+1.310488608" watchObservedRunningTime="2025-12-12 18:39:02.509335832 +0000 UTC 
m=+1.347112320" Dec 12 18:39:02.757449 sudo[1728]: pam_unix(sudo:session): session closed for user root Dec 12 18:39:02.765304 sshd[1727]: Connection closed by 147.75.109.163 port 44614 Dec 12 18:39:02.768139 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:02.773970 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:39:02.775270 systemd[1]: sshd@4-134.199.209.86:22-147.75.109.163:44614.service: Deactivated successfully. Dec 12 18:39:02.779055 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:39:02.779448 systemd[1]: session-5.scope: Consumed 5.041s CPU time, 164.5M memory peak. Dec 12 18:39:02.783125 systemd-logind[1510]: Removed session 5. Dec 12 18:39:03.465191 kubelet[2660]: E1212 18:39:03.464800 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:03.465191 kubelet[2660]: E1212 18:39:03.465024 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:05.094386 kubelet[2660]: I1212 18:39:05.094304 2660 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:39:05.096640 containerd[1527]: time="2025-12-12T18:39:05.096546658Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 18:39:05.098433 kubelet[2660]: I1212 18:39:05.098382 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:39:05.259837 kubelet[2660]: E1212 18:39:05.259257 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:05.472611 kubelet[2660]: E1212 18:39:05.470382 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:05.801072 systemd[1]: Created slice kubepods-besteffort-pod8c31dfa9_1264_470d_a169_e1e57e4261ee.slice - libcontainer container kubepods-besteffort-pod8c31dfa9_1264_470d_a169_e1e57e4261ee.slice. Dec 12 18:39:05.840113 systemd[1]: Created slice kubepods-burstable-pod6f16a4dd_a7d3_4e66_8476_4acb704ea3f2.slice - libcontainer container kubepods-burstable-pod6f16a4dd_a7d3_4e66_8476_4acb704ea3f2.slice. 
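The repeated "Nameserver limits exceeded" warnings come from the kubelet reading the node's /etc/resolv.conf: only the first three nameserver entries are applied, anything beyond that is dropped, and the applied line above shows that one of those three (67.207.67.2) is already a duplicate. The file itself is not captured in this log, so the following is only a plausible reconstruction; everything past the three addresses visible in the warning is an assumption.

    # /etc/resolv.conf on the droplet (reconstruction; only the first three
    # nameserver lines are confirmed by the kubelet warning above)
    nameserver 67.207.67.2
    nameserver 67.207.67.3
    nameserver 67.207.67.2    # duplicate of the first entry
    # at least one further nameserver line must exist for the limit warning
    # to fire; it is omitted by the kubelet and unknown here

Trimming the file to at most three unique resolvers would silence the warning without changing effective DNS behaviour.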
Dec 12 18:39:05.885843 kubelet[2660]: I1212 18:39:05.885787 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6x2\" (UniqueName: \"kubernetes.io/projected/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-kube-api-access-bp6x2\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:05.887239 kubelet[2660]: I1212 18:39:05.886128 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c31dfa9-1264-470d-a169-e1e57e4261ee-kube-proxy\") pod \"kube-proxy-vswpj\" (UID: \"8c31dfa9-1264-470d-a169-e1e57e4261ee\") " pod="kube-system/kube-proxy-vswpj" Dec 12 18:39:05.887567 kubelet[2660]: I1212 18:39:05.887534 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c31dfa9-1264-470d-a169-e1e57e4261ee-lib-modules\") pod \"kube-proxy-vswpj\" (UID: \"8c31dfa9-1264-470d-a169-e1e57e4261ee\") " pod="kube-system/kube-proxy-vswpj" Dec 12 18:39:05.887689 kubelet[2660]: I1212 18:39:05.887653 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c31dfa9-1264-470d-a169-e1e57e4261ee-xtables-lock\") pod \"kube-proxy-vswpj\" (UID: \"8c31dfa9-1264-470d-a169-e1e57e4261ee\") " pod="kube-system/kube-proxy-vswpj" Dec 12 18:39:05.887775 kubelet[2660]: I1212 18:39:05.887763 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-cni\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:05.887941 kubelet[2660]: I1212 18:39:05.887878 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-xtables-lock\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:05.888080 kubelet[2660]: I1212 18:39:05.888022 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlqg\" (UniqueName: \"kubernetes.io/projected/8c31dfa9-1264-470d-a169-e1e57e4261ee-kube-api-access-fjlqg\") pod \"kube-proxy-vswpj\" (UID: \"8c31dfa9-1264-470d-a169-e1e57e4261ee\") " pod="kube-system/kube-proxy-vswpj" Dec 12 18:39:05.888080 kubelet[2660]: I1212 18:39:05.888045 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-flannel-cfg\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:05.888219 kubelet[2660]: I1212 18:39:05.888186 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-run\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:05.888317 kubelet[2660]: I1212 18:39:05.888282 2660 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6f16a4dd-a7d3-4e66-8476-4acb704ea3f2-cni-plugin\") pod \"kube-flannel-ds-j7kfc\" (UID: \"6f16a4dd-a7d3-4e66-8476-4acb704ea3f2\") " pod="kube-flannel/kube-flannel-ds-j7kfc" Dec 12 18:39:06.115557 kubelet[2660]: E1212 18:39:06.115320 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:06.117510 containerd[1527]: time="2025-12-12T18:39:06.117305162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vswpj,Uid:8c31dfa9-1264-470d-a169-e1e57e4261ee,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:06.146164 kubelet[2660]: E1212 18:39:06.146046 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:06.148182 containerd[1527]: time="2025-12-12T18:39:06.148116511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-j7kfc,Uid:6f16a4dd-a7d3-4e66-8476-4acb704ea3f2,Namespace:kube-flannel,Attempt:0,}" Dec 12 18:39:06.169530 containerd[1527]: time="2025-12-12T18:39:06.169241271Z" level=info msg="connecting to shim 637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32" address="unix:///run/containerd/s/57f5d1ce8886374416a84d881d6a5d700312e46ba9f01cb3652262e9cae86285" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:06.207475 containerd[1527]: time="2025-12-12T18:39:06.207317882Z" level=info msg="connecting to shim d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a" address="unix:///run/containerd/s/1ea41c75381d4278b63f13fad76205a139abc79c30c5ebc3aa4fb0b59fb614d8" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:06.215868 systemd[1]: Started cri-containerd-637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32.scope - libcontainer container 637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32. Dec 12 18:39:06.276025 systemd[1]: Started cri-containerd-d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a.scope - libcontainer container d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a. 
Dec 12 18:39:06.310718 containerd[1527]: time="2025-12-12T18:39:06.310656714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vswpj,Uid:8c31dfa9-1264-470d-a169-e1e57e4261ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32\"" Dec 12 18:39:06.314305 kubelet[2660]: E1212 18:39:06.314223 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:06.320299 containerd[1527]: time="2025-12-12T18:39:06.320138668Z" level=info msg="CreateContainer within sandbox \"637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:39:06.338994 containerd[1527]: time="2025-12-12T18:39:06.338923586Z" level=info msg="Container 1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:06.352375 containerd[1527]: time="2025-12-12T18:39:06.351813104Z" level=info msg="CreateContainer within sandbox \"637c14f23ad2523cf22c1ba531630fb6caf36e782173de9836869f27cc6c4a32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f\"" Dec 12 18:39:06.353948 containerd[1527]: time="2025-12-12T18:39:06.353886657Z" level=info msg="StartContainer for \"1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f\"" Dec 12 18:39:06.365536 containerd[1527]: time="2025-12-12T18:39:06.365439464Z" level=info msg="connecting to shim 1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f" address="unix:///run/containerd/s/57f5d1ce8886374416a84d881d6a5d700312e46ba9f01cb3652262e9cae86285" protocol=ttrpc version=3 Dec 12 18:39:06.401628 containerd[1527]: time="2025-12-12T18:39:06.401391937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-j7kfc,Uid:6f16a4dd-a7d3-4e66-8476-4acb704ea3f2,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\"" Dec 12 18:39:06.402947 kubelet[2660]: E1212 18:39:06.402775 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:06.407094 containerd[1527]: time="2025-12-12T18:39:06.407005995Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 12 18:39:06.410933 systemd-resolved[1381]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Dec 12 18:39:06.426992 systemd[1]: Started cri-containerd-1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f.scope - libcontainer container 1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f. 
Dec 12 18:39:06.512149 containerd[1527]: time="2025-12-12T18:39:06.512100422Z" level=info msg="StartContainer for \"1bd4223da0c54074bf1139391cd60fcac7364db2ab6be4d520da1a476266cb8f\" returns successfully" Dec 12 18:39:07.481752 kubelet[2660]: E1212 18:39:07.481630 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:07.496832 kubelet[2660]: I1212 18:39:07.496766 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vswpj" podStartSLOduration=2.496746818 podStartE2EDuration="2.496746818s" podCreationTimestamp="2025-12-12 18:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:07.495990296 +0000 UTC m=+6.333766787" watchObservedRunningTime="2025-12-12 18:39:07.496746818 +0000 UTC m=+6.334523314" Dec 12 18:39:08.186802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992823508.mount: Deactivated successfully. Dec 12 18:39:08.238526 containerd[1527]: time="2025-12-12T18:39:08.237726747Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:08.239708 containerd[1527]: time="2025-12-12T18:39:08.239665752Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 12 18:39:08.240760 containerd[1527]: time="2025-12-12T18:39:08.240724080Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:08.242956 containerd[1527]: time="2025-12-12T18:39:08.242910799Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:08.243745 containerd[1527]: time="2025-12-12T18:39:08.243708733Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.836620598s" Dec 12 18:39:08.243857 containerd[1527]: time="2025-12-12T18:39:08.243842850Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 12 18:39:08.247521 containerd[1527]: time="2025-12-12T18:39:08.247076392Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 12 18:39:08.258571 containerd[1527]: time="2025-12-12T18:39:08.256627810Z" level=info msg="Container 5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:08.261319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046308279.mount: Deactivated successfully. 
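The pod_startup_latency_tracker entries report two numbers: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally subtracts time spent pulling images. For these pods firstStartedPulling and lastFinishedPulling are the zero value ("0001-01-01 00:00:00"), so no pull time is deducted and the two durations coincide. For kube-proxy-vswpj above, using only timestamps from the log entry itself:

    podCreationTimestamp   2025-12-12 18:39:05
    observedRunningTime    2025-12-12 18:39:07.496 (approx.)
    E2E duration           18:39:07.496 - 18:39:05        = ~2.496 s
    SLO duration           ~2.496 s - 0 s image-pull time = ~2.496 s   (matches podStartSLOduration=2.496746818)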
Dec 12 18:39:08.271465 containerd[1527]: time="2025-12-12T18:39:08.271386692Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea\"" Dec 12 18:39:08.273036 containerd[1527]: time="2025-12-12T18:39:08.272962761Z" level=info msg="StartContainer for \"5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea\"" Dec 12 18:39:08.274985 containerd[1527]: time="2025-12-12T18:39:08.274911675Z" level=info msg="connecting to shim 5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea" address="unix:///run/containerd/s/1ea41c75381d4278b63f13fad76205a139abc79c30c5ebc3aa4fb0b59fb614d8" protocol=ttrpc version=3 Dec 12 18:39:08.309809 systemd[1]: Started cri-containerd-5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea.scope - libcontainer container 5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea. Dec 12 18:39:08.351179 systemd[1]: cri-containerd-5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea.scope: Deactivated successfully. Dec 12 18:39:08.357867 containerd[1527]: time="2025-12-12T18:39:08.357721161Z" level=info msg="StartContainer for \"5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea\" returns successfully" Dec 12 18:39:08.358826 containerd[1527]: time="2025-12-12T18:39:08.358718578Z" level=info msg="received container exit event container_id:\"5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea\" id:\"5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea\" pid:2992 exited_at:{seconds:1765564748 nanos:357216603}" Dec 12 18:39:08.488330 kubelet[2660]: E1212 18:39:08.486871 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:08.490340 kubelet[2660]: E1212 18:39:08.490282 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:08.492228 containerd[1527]: time="2025-12-12T18:39:08.491784735Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 12 18:39:09.047168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be67622de4c60e9fa1ca1dbb7ce6b17439f64c2c2747abfc2f0ecfc0b9b5aea-rootfs.mount: Deactivated successfully. Dec 12 18:39:10.413770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257887221.mount: Deactivated successfully. 
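Container 5be67622... is the kube-flannel DaemonSet's first init container, install-cni-plugin, built from the flannel-cni-plugin:v1.1.2 image pulled just above. It only copies the flannel CNI binary onto the host and exits, which is why its cri-containerd scope is deactivated within milliseconds of StartContainer returning. A sketch of that init container as it appears in the upstream manifest; the command and mount path are taken from upstream and are an assumption here, not something this log records:

    initContainers:                       # sketch, per upstream kube-flannel.yml (assumed)
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin                  # hostPath /opt/cni/bin, see the volume sketch earlier
        mountPath: /opt/cni/bin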
Dec 12 18:39:10.527784 kubelet[2660]: E1212 18:39:10.527710 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.355117 containerd[1527]: time="2025-12-12T18:39:11.355027198Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:11.356660 containerd[1527]: time="2025-12-12T18:39:11.356609992Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 12 18:39:11.357452 containerd[1527]: time="2025-12-12T18:39:11.357385196Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:11.360435 containerd[1527]: time="2025-12-12T18:39:11.360363807Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:39:11.361622 containerd[1527]: time="2025-12-12T18:39:11.361455746Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.869589147s" Dec 12 18:39:11.361622 containerd[1527]: time="2025-12-12T18:39:11.361511623Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 12 18:39:11.364614 containerd[1527]: time="2025-12-12T18:39:11.364240901Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:39:11.375705 containerd[1527]: time="2025-12-12T18:39:11.374168451Z" level=info msg="Container 9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:11.376741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902353028.mount: Deactivated successfully. Dec 12 18:39:11.389994 containerd[1527]: time="2025-12-12T18:39:11.389666407Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7\"" Dec 12 18:39:11.392875 containerd[1527]: time="2025-12-12T18:39:11.392807132Z" level=info msg="StartContainer for \"9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7\"" Dec 12 18:39:11.396884 containerd[1527]: time="2025-12-12T18:39:11.396698221Z" level=info msg="connecting to shim 9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7" address="unix:///run/containerd/s/1ea41c75381d4278b63f13fad76205a139abc79c30c5ebc3aa4fb0b59fb614d8" protocol=ttrpc version=3 Dec 12 18:39:11.432791 systemd[1]: Started cri-containerd-9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7.scope - libcontainer container 9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7. 
Dec 12 18:39:11.435872 kubelet[2660]: E1212 18:39:11.435675 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.480189 systemd[1]: cri-containerd-9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7.scope: Deactivated successfully. Dec 12 18:39:11.484069 containerd[1527]: time="2025-12-12T18:39:11.483820089Z" level=info msg="received container exit event container_id:\"9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7\" id:\"9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7\" pid:3061 exited_at:{seconds:1765564751 nanos:480272423}" Dec 12 18:39:11.487033 containerd[1527]: time="2025-12-12T18:39:11.486981444Z" level=info msg="StartContainer for \"9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7\" returns successfully" Dec 12 18:39:11.510442 kubelet[2660]: E1212 18:39:11.510309 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.511457 kubelet[2660]: E1212 18:39:11.511429 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.518233 kubelet[2660]: E1212 18:39:11.516318 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.540509 kubelet[2660]: I1212 18:39:11.540350 2660 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:39:11.556757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9563824f89ccdde42a9c5d6d20033287b6aba4ee3c8834ac7ea911e5ff6ecad7-rootfs.mount: Deactivated successfully. Dec 12 18:39:11.628927 systemd[1]: Created slice kubepods-burstable-podd7233e86_b9c5_4659_9bc1_cd23246e15e6.slice - libcontainer container kubepods-burstable-podd7233e86_b9c5_4659_9bc1_cd23246e15e6.slice. Dec 12 18:39:11.646038 systemd[1]: Created slice kubepods-burstable-pod2870b4ff_cbaa_4423_a784_3cdb8cce9e6d.slice - libcontainer container kubepods-burstable-pod2870b4ff_cbaa_4423_a784_3cdb8cce9e6d.slice. 
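The second short-lived container, 9563824f... (install-cni), comes from the flannel:v0.22.0 image and, per the upstream manifest, copies the CNI network config out of the kube-flannel-cfg ConfigMap into /etc/cni/net.d/10-flannel.conflist. That is the file containerd was waiting for when it logged "No cni config template is specified, wait for other system components to drop the config" at 18:39:05. The actual ConfigMap contents are not captured here; the upstream default, which is at least consistent with the delegate config printed later in this log (name cbr0, cniVersion 0.3.1, hairpinMode, isDefaultGateway), looks like this:

    # /etc/cni/net.d/10-flannel.conflist (assumed: upstream kube-flannel default, not recorded in this log)
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }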
Dec 12 18:39:11.728057 kubelet[2660]: I1212 18:39:11.727987 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2870b4ff-cbaa-4423-a784-3cdb8cce9e6d-config-volume\") pod \"coredns-668d6bf9bc-rnb4m\" (UID: \"2870b4ff-cbaa-4423-a784-3cdb8cce9e6d\") " pod="kube-system/coredns-668d6bf9bc-rnb4m" Dec 12 18:39:11.728731 kubelet[2660]: I1212 18:39:11.728650 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpfm\" (UniqueName: \"kubernetes.io/projected/2870b4ff-cbaa-4423-a784-3cdb8cce9e6d-kube-api-access-6vpfm\") pod \"coredns-668d6bf9bc-rnb4m\" (UID: \"2870b4ff-cbaa-4423-a784-3cdb8cce9e6d\") " pod="kube-system/coredns-668d6bf9bc-rnb4m" Dec 12 18:39:11.728931 kubelet[2660]: I1212 18:39:11.728862 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7233e86-b9c5-4659-9bc1-cd23246e15e6-config-volume\") pod \"coredns-668d6bf9bc-gzlkq\" (UID: \"d7233e86-b9c5-4659-9bc1-cd23246e15e6\") " pod="kube-system/coredns-668d6bf9bc-gzlkq" Dec 12 18:39:11.729057 kubelet[2660]: I1212 18:39:11.729011 2660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7b97\" (UniqueName: \"kubernetes.io/projected/d7233e86-b9c5-4659-9bc1-cd23246e15e6-kube-api-access-w7b97\") pod \"coredns-668d6bf9bc-gzlkq\" (UID: \"d7233e86-b9c5-4659-9bc1-cd23246e15e6\") " pod="kube-system/coredns-668d6bf9bc-gzlkq" Dec 12 18:39:11.938649 kubelet[2660]: E1212 18:39:11.938454 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.940366 containerd[1527]: time="2025-12-12T18:39:11.939962032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gzlkq,Uid:d7233e86-b9c5-4659-9bc1-cd23246e15e6,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:11.950929 kubelet[2660]: E1212 18:39:11.950883 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:11.952854 containerd[1527]: time="2025-12-12T18:39:11.952634463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb4m,Uid:2870b4ff-cbaa-4423-a784-3cdb8cce9e6d,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:11.979211 containerd[1527]: time="2025-12-12T18:39:11.979143427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gzlkq,Uid:d7233e86-b9c5-4659-9bc1-cd23246e15e6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aded81898f972e0751aa747246875e224ca2d0a9c7014f13a03efb1076ce68\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 12 18:39:11.979655 kubelet[2660]: E1212 18:39:11.979435 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aded81898f972e0751aa747246875e224ca2d0a9c7014f13a03efb1076ce68\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 12 18:39:11.979655 kubelet[2660]: E1212 
18:39:11.979566 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aded81898f972e0751aa747246875e224ca2d0a9c7014f13a03efb1076ce68\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-gzlkq" Dec 12 18:39:11.979655 kubelet[2660]: E1212 18:39:11.979591 2660 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aded81898f972e0751aa747246875e224ca2d0a9c7014f13a03efb1076ce68\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-gzlkq" Dec 12 18:39:11.979851 kubelet[2660]: E1212 18:39:11.979673 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gzlkq_kube-system(d7233e86-b9c5-4659-9bc1-cd23246e15e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gzlkq_kube-system(d7233e86-b9c5-4659-9bc1-cd23246e15e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24aded81898f972e0751aa747246875e224ca2d0a9c7014f13a03efb1076ce68\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-gzlkq" podUID="d7233e86-b9c5-4659-9bc1-cd23246e15e6" Dec 12 18:39:11.985732 containerd[1527]: time="2025-12-12T18:39:11.985638206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb4m,Uid:2870b4ff-cbaa-4423-a784-3cdb8cce9e6d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fcb5b8ea89b52afae3e7d044ab8c583f9a05ee4ea079f9cc03a9d93f69b77c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 12 18:39:11.986072 kubelet[2660]: E1212 18:39:11.986033 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fcb5b8ea89b52afae3e7d044ab8c583f9a05ee4ea079f9cc03a9d93f69b77c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 12 18:39:11.986324 kubelet[2660]: E1212 18:39:11.986302 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fcb5b8ea89b52afae3e7d044ab8c583f9a05ee4ea079f9cc03a9d93f69b77c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rnb4m" Dec 12 18:39:11.986324 kubelet[2660]: E1212 18:39:11.986361 2660 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85fcb5b8ea89b52afae3e7d044ab8c583f9a05ee4ea079f9cc03a9d93f69b77c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-rnb4m" Dec 12 18:39:11.986646 kubelet[2660]: E1212 18:39:11.986577 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-rnb4m_kube-system(2870b4ff-cbaa-4423-a784-3cdb8cce9e6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rnb4m_kube-system(2870b4ff-cbaa-4423-a784-3cdb8cce9e6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85fcb5b8ea89b52afae3e7d044ab8c583f9a05ee4ea079f9cc03a9d93f69b77c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-rnb4m" podUID="2870b4ff-cbaa-4423-a784-3cdb8cce9e6d" Dec 12 18:39:12.493666 update_engine[1512]: I20251212 18:39:12.493551 1512 update_attempter.cc:509] Updating boot flags... Dec 12 18:39:12.516283 kubelet[2660]: E1212 18:39:12.516112 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:12.528828 containerd[1527]: time="2025-12-12T18:39:12.528764235Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 12 18:39:12.562796 containerd[1527]: time="2025-12-12T18:39:12.560716444Z" level=info msg="Container 415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:12.567945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655230716.mount: Deactivated successfully. Dec 12 18:39:12.579995 containerd[1527]: time="2025-12-12T18:39:12.579943641Z" level=info msg="CreateContainer within sandbox \"d2dc358855951d1c589cb9a69d9d6a30f4b58f9e6b53a3379de28627abb7244a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42\"" Dec 12 18:39:12.583567 containerd[1527]: time="2025-12-12T18:39:12.582634061Z" level=info msg="StartContainer for \"415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42\"" Dec 12 18:39:12.584781 containerd[1527]: time="2025-12-12T18:39:12.584739211Z" level=info msg="connecting to shim 415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42" address="unix:///run/containerd/s/1ea41c75381d4278b63f13fad76205a139abc79c30c5ebc3aa4fb0b59fb614d8" protocol=ttrpc version=3 Dec 12 18:39:12.699763 systemd[1]: Started cri-containerd-415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42.scope - libcontainer container 415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42. 
Dec 12 18:39:12.997777 containerd[1527]: time="2025-12-12T18:39:12.997570347Z" level=info msg="StartContainer for \"415d24cdf19eb5f70a25f5454d788746cbfc0b7b2e36a18b631bce437057cd42\" returns successfully" Dec 12 18:39:13.523244 kubelet[2660]: E1212 18:39:13.522926 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:14.119755 systemd-networkd[1419]: flannel.1: Link UP Dec 12 18:39:14.119765 systemd-networkd[1419]: flannel.1: Gained carrier Dec 12 18:39:14.526915 kubelet[2660]: E1212 18:39:14.525296 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:15.383909 systemd-networkd[1419]: flannel.1: Gained IPv6LL Dec 12 18:39:25.415004 kubelet[2660]: E1212 18:39:25.414316 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:25.415731 containerd[1527]: time="2025-12-12T18:39:25.415697546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gzlkq,Uid:d7233e86-b9c5-4659-9bc1-cd23246e15e6,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:25.444401 systemd-networkd[1419]: cni0: Link UP Dec 12 18:39:25.444411 systemd-networkd[1419]: cni0: Gained carrier Dec 12 18:39:25.452136 systemd-networkd[1419]: cni0: Lost carrier Dec 12 18:39:25.460649 systemd-networkd[1419]: veth09c8c971: Link UP Dec 12 18:39:25.463832 kernel: cni0: port 1(veth09c8c971) entered blocking state Dec 12 18:39:25.463965 kernel: cni0: port 1(veth09c8c971) entered disabled state Dec 12 18:39:25.470859 kernel: veth09c8c971: entered allmulticast mode Dec 12 18:39:25.470990 kernel: veth09c8c971: entered promiscuous mode Dec 12 18:39:25.484788 kernel: cni0: port 1(veth09c8c971) entered blocking state Dec 12 18:39:25.484910 kernel: cni0: port 1(veth09c8c971) entered forwarding state Dec 12 18:39:25.485452 systemd-networkd[1419]: veth09c8c971: Gained carrier Dec 12 18:39:25.486120 systemd-networkd[1419]: cni0: Gained carrier Dec 12 18:39:25.495171 containerd[1527]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a48e8), "name":"cbr0", "type":"bridge"} Dec 12 18:39:25.495171 containerd[1527]: delegateAdd: netconf sent to delegate plugin: Dec 12 18:39:25.531180 containerd[1527]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-12T18:39:25.531125742Z" level=info msg="connecting to shim ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248" address="unix:///run/containerd/s/c435be04858aafe09bf9b39fd5909e3479d0fd3b6343e9fcec1f0a6a4740c651" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:25.566826 systemd[1]: Started 
cri-containerd-ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248.scope - libcontainer container ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248. Dec 12 18:39:25.630205 containerd[1527]: time="2025-12-12T18:39:25.630149935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gzlkq,Uid:d7233e86-b9c5-4659-9bc1-cd23246e15e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248\"" Dec 12 18:39:25.631543 kubelet[2660]: E1212 18:39:25.631478 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:25.636027 containerd[1527]: time="2025-12-12T18:39:25.635979997Z" level=info msg="CreateContainer within sandbox \"ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:39:25.647526 containerd[1527]: time="2025-12-12T18:39:25.646756977Z" level=info msg="Container 8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:25.650472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084136391.mount: Deactivated successfully. Dec 12 18:39:25.658560 containerd[1527]: time="2025-12-12T18:39:25.658483942Z" level=info msg="CreateContainer within sandbox \"ba1df1108b17fb7e7a57e5654c0a60f5487904017f74fe88d3f68ad9ba569248\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae\"" Dec 12 18:39:25.659460 containerd[1527]: time="2025-12-12T18:39:25.659298471Z" level=info msg="StartContainer for \"8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae\"" Dec 12 18:39:25.660652 containerd[1527]: time="2025-12-12T18:39:25.660624026Z" level=info msg="connecting to shim 8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae" address="unix:///run/containerd/s/c435be04858aafe09bf9b39fd5909e3479d0fd3b6343e9fcec1f0a6a4740c651" protocol=ttrpc version=3 Dec 12 18:39:25.695810 systemd[1]: Started cri-containerd-8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae.scope - libcontainer container 8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae. 
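For readability, the delegate netconf that the flannel CNI plugin hands to the bridge plugin, printed inline in the containerd line at 18:39:25.531 above, is reproduced here pretty-printed; nothing is added beyond formatting:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "hairpinMode": true,
      "isDefaultGateway": true,
      "isGateway": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[ {"subnet": "192.168.0.0/24"} ]],
        "routes": [ {"dst": "192.168.0.0/17"} ]
      }
    }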
Dec 12 18:39:25.745797 containerd[1527]: time="2025-12-12T18:39:25.745714245Z" level=info msg="StartContainer for \"8653f215ee7a37e9f3996c215e68862497c0b2569ac823225b91e2cadd6b4aae\" returns successfully" Dec 12 18:39:26.414181 kubelet[2660]: E1212 18:39:26.413883 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:26.414990 containerd[1527]: time="2025-12-12T18:39:26.414591248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb4m,Uid:2870b4ff-cbaa-4423-a784-3cdb8cce9e6d,Namespace:kube-system,Attempt:0,}" Dec 12 18:39:26.436644 kernel: cni0: port 2(vethe70a9b1a) entered blocking state Dec 12 18:39:26.436834 kernel: cni0: port 2(vethe70a9b1a) entered disabled state Dec 12 18:39:26.436161 systemd-networkd[1419]: vethe70a9b1a: Link UP Dec 12 18:39:26.440559 kernel: vethe70a9b1a: entered allmulticast mode Dec 12 18:39:26.440681 kernel: vethe70a9b1a: entered promiscuous mode Dec 12 18:39:26.451637 kernel: cni0: port 2(vethe70a9b1a) entered blocking state Dec 12 18:39:26.451806 kernel: cni0: port 2(vethe70a9b1a) entered forwarding state Dec 12 18:39:26.451719 systemd-networkd[1419]: vethe70a9b1a: Gained carrier Dec 12 18:39:26.456565 containerd[1527]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e938), "name":"cbr0", "type":"bridge"} Dec 12 18:39:26.456565 containerd[1527]: delegateAdd: netconf sent to delegate plugin: Dec 12 18:39:26.506905 containerd[1527]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-12T18:39:26.506832582Z" level=info msg="connecting to shim 9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b" address="unix:///run/containerd/s/bf24dd28e325319f107bbd82c9d1c09b5150136ade36ebeafcaed188ca716eb5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:39:26.521708 systemd-networkd[1419]: cni0: Gained IPv6LL Dec 12 18:39:26.556003 kubelet[2660]: E1212 18:39:26.555972 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:26.565800 systemd[1]: Started cri-containerd-9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b.scope - libcontainer container 9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b. 
Dec 12 18:39:26.576916 kubelet[2660]: I1212 18:39:26.576801 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-j7kfc" podStartSLOduration=16.620013435 podStartE2EDuration="21.57677519s" podCreationTimestamp="2025-12-12 18:39:05 +0000 UTC" firstStartedPulling="2025-12-12 18:39:06.405690843 +0000 UTC m=+5.243467325" lastFinishedPulling="2025-12-12 18:39:11.362452614 +0000 UTC m=+10.200229080" observedRunningTime="2025-12-12 18:39:13.538397272 +0000 UTC m=+12.376173738" watchObservedRunningTime="2025-12-12 18:39:26.57677519 +0000 UTC m=+25.414551681" Dec 12 18:39:26.604419 kubelet[2660]: I1212 18:39:26.604270 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gzlkq" podStartSLOduration=20.604240473 podStartE2EDuration="20.604240473s" podCreationTimestamp="2025-12-12 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:26.578234422 +0000 UTC m=+25.416010913" watchObservedRunningTime="2025-12-12 18:39:26.604240473 +0000 UTC m=+25.442016965" Dec 12 18:39:26.679678 containerd[1527]: time="2025-12-12T18:39:26.679397765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb4m,Uid:2870b4ff-cbaa-4423-a784-3cdb8cce9e6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b\"" Dec 12 18:39:26.681996 kubelet[2660]: E1212 18:39:26.681904 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:26.687963 containerd[1527]: time="2025-12-12T18:39:26.687845335Z" level=info msg="CreateContainer within sandbox \"9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:39:26.706809 containerd[1527]: time="2025-12-12T18:39:26.706750225Z" level=info msg="Container 7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:39:26.716136 containerd[1527]: time="2025-12-12T18:39:26.716085547Z" level=info msg="CreateContainer within sandbox \"9213b8d8837e34676762b2a21a726c95a64f03ce33cfc875140dbe948686973b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d\"" Dec 12 18:39:26.717593 containerd[1527]: time="2025-12-12T18:39:26.717531308Z" level=info msg="StartContainer for \"7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d\"" Dec 12 18:39:26.719042 containerd[1527]: time="2025-12-12T18:39:26.719002246Z" level=info msg="connecting to shim 7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d" address="unix:///run/containerd/s/bf24dd28e325319f107bbd82c9d1c09b5150136ade36ebeafcaed188ca716eb5" protocol=ttrpc version=3 Dec 12 18:39:26.747825 systemd[1]: Started cri-containerd-7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d.scope - libcontainer container 7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d. 
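With both sandboxes up, the coredns containers start from the config-volume mounted earlier (the kube-system/coredns ConfigMap). The Corefile itself does not appear in this log; for context, the stock kubeadm Corefile below is what such a cluster typically runs, and its forward plugin points at the node's /etc/resolv.conf, the same file behind the nameserver-limit warnings. Treat it as an illustrative default, not the cluster's actual config.

    # Corefile (stock kubeadm default, assumed; actual ConfigMap not recorded in this log)
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }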
Dec 12 18:39:26.797876 containerd[1527]: time="2025-12-12T18:39:26.797789789Z" level=info msg="StartContainer for \"7a601cd3b08a11e8082dd5bd5123bcd42a5da9985d7555bcac5be97c32a0513d\" returns successfully" Dec 12 18:39:27.479921 systemd-networkd[1419]: veth09c8c971: Gained IPv6LL Dec 12 18:39:27.560717 kubelet[2660]: E1212 18:39:27.560661 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:27.562978 kubelet[2660]: E1212 18:39:27.561479 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:27.603200 kubelet[2660]: I1212 18:39:27.602909 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rnb4m" podStartSLOduration=21.602880664 podStartE2EDuration="21.602880664s" podCreationTimestamp="2025-12-12 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:39:27.583673265 +0000 UTC m=+26.421449767" watchObservedRunningTime="2025-12-12 18:39:27.602880664 +0000 UTC m=+26.440657165" Dec 12 18:39:27.607726 systemd-networkd[1419]: vethe70a9b1a: Gained IPv6LL Dec 12 18:39:28.563645 kubelet[2660]: E1212 18:39:28.563531 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:28.563645 kubelet[2660]: E1212 18:39:28.563542 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:29.565784 kubelet[2660]: E1212 18:39:29.565738 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Dec 12 18:39:45.153708 systemd[1]: Started sshd@5-134.199.209.86:22-147.75.109.163:38324.service - OpenSSH per-connection server daemon (147.75.109.163:38324). Dec 12 18:39:45.236098 sshd[3633]: Accepted publickey for core from 147.75.109.163 port 38324 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:39:45.238288 sshd-session[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:39:45.244557 systemd-logind[1510]: New session 6 of user core. Dec 12 18:39:45.255938 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:39:45.417395 sshd[3636]: Connection closed by 147.75.109.163 port 38324 Dec 12 18:39:45.417763 sshd-session[3633]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:45.423656 systemd[1]: sshd@5-134.199.209.86:22-147.75.109.163:38324.service: Deactivated successfully. Dec 12 18:39:45.427077 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:39:45.428719 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:39:45.431584 systemd-logind[1510]: Removed session 6. Dec 12 18:39:50.433887 systemd[1]: Started sshd@6-134.199.209.86:22-147.75.109.163:38338.service - OpenSSH per-connection server daemon (147.75.109.163:38338). 
Dec 12 18:39:50.505551 sshd[3671]: Accepted publickey for core from 147.75.109.163 port 38338 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:39:50.508456 sshd-session[3671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:39:50.517619 systemd-logind[1510]: New session 7 of user core. Dec 12 18:39:50.526883 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:39:50.672220 sshd[3674]: Connection closed by 147.75.109.163 port 38338 Dec 12 18:39:50.673151 sshd-session[3671]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:50.678452 systemd[1]: sshd@6-134.199.209.86:22-147.75.109.163:38338.service: Deactivated successfully. Dec 12 18:39:50.680852 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:39:50.682409 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:39:50.684448 systemd-logind[1510]: Removed session 7. Dec 12 18:39:55.689586 systemd[1]: Started sshd@7-134.199.209.86:22-147.75.109.163:40712.service - OpenSSH per-connection server daemon (147.75.109.163:40712). Dec 12 18:39:55.766173 sshd[3709]: Accepted publickey for core from 147.75.109.163 port 40712 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:39:55.768161 sshd-session[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:39:55.775996 systemd-logind[1510]: New session 8 of user core. Dec 12 18:39:55.782916 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:39:55.933193 sshd[3712]: Connection closed by 147.75.109.163 port 40712 Dec 12 18:39:55.932638 sshd-session[3709]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:55.947075 systemd[1]: sshd@7-134.199.209.86:22-147.75.109.163:40712.service: Deactivated successfully. Dec 12 18:39:55.949784 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:39:55.951363 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:39:55.954897 systemd[1]: Started sshd@8-134.199.209.86:22-147.75.109.163:40716.service - OpenSSH per-connection server daemon (147.75.109.163:40716). Dec 12 18:39:55.957300 systemd-logind[1510]: Removed session 8. Dec 12 18:39:56.030327 sshd[3725]: Accepted publickey for core from 147.75.109.163 port 40716 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM Dec 12 18:39:56.031969 sshd-session[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:39:56.038478 systemd-logind[1510]: New session 9 of user core. Dec 12 18:39:56.046824 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:39:56.248981 sshd[3728]: Connection closed by 147.75.109.163 port 40716 Dec 12 18:39:56.249772 sshd-session[3725]: pam_unix(sshd:session): session closed for user core Dec 12 18:39:56.265404 systemd[1]: sshd@8-134.199.209.86:22-147.75.109.163:40716.service: Deactivated successfully. Dec 12 18:39:56.270979 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:39:56.274484 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:39:56.280912 systemd[1]: Started sshd@9-134.199.209.86:22-147.75.109.163:40724.service - OpenSSH per-connection server daemon (147.75.109.163:40724). Dec 12 18:39:56.284477 systemd-logind[1510]: Removed session 9. 
Dec 12 18:39:56.355398 sshd[3738]: Accepted publickey for core from 147.75.109.163 port 40724 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:39:56.357592 sshd-session[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:39:56.363603 systemd-logind[1510]: New session 10 of user core.
Dec 12 18:39:56.371866 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 18:39:56.518442 sshd[3741]: Connection closed by 147.75.109.163 port 40724
Dec 12 18:39:56.519342 sshd-session[3738]: pam_unix(sshd:session): session closed for user core
Dec 12 18:39:56.530325 systemd[1]: sshd@9-134.199.209.86:22-147.75.109.163:40724.service: Deactivated successfully.
Dec 12 18:39:56.540073 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 18:39:56.542293 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit.
Dec 12 18:39:56.545031 systemd-logind[1510]: Removed session 10.
Dec 12 18:40:01.537991 systemd[1]: Started sshd@10-134.199.209.86:22-147.75.109.163:40738.service - OpenSSH per-connection server daemon (147.75.109.163:40738).
Dec 12 18:40:01.621785 sshd[3776]: Accepted publickey for core from 147.75.109.163 port 40738 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:01.624253 sshd-session[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:01.633356 systemd-logind[1510]: New session 11 of user core.
Dec 12 18:40:01.643372 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 18:40:01.847844 sshd[3779]: Connection closed by 147.75.109.163 port 40738
Dec 12 18:40:01.848652 sshd-session[3776]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:01.856167 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit.
Dec 12 18:40:01.856340 systemd[1]: sshd@10-134.199.209.86:22-147.75.109.163:40738.service: Deactivated successfully.
Dec 12 18:40:01.861017 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 18:40:01.864717 systemd-logind[1510]: Removed session 11.
Dec 12 18:40:06.872775 systemd[1]: Started sshd@11-134.199.209.86:22-147.75.109.163:56556.service - OpenSSH per-connection server daemon (147.75.109.163:56556).
Dec 12 18:40:06.981465 sshd[3815]: Accepted publickey for core from 147.75.109.163 port 56556 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:06.983617 sshd-session[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:06.991984 systemd-logind[1510]: New session 12 of user core.
Dec 12 18:40:07.005925 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 18:40:07.150369 sshd[3818]: Connection closed by 147.75.109.163 port 56556
Dec 12 18:40:07.151739 sshd-session[3815]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:07.167123 systemd[1]: sshd@11-134.199.209.86:22-147.75.109.163:56556.service: Deactivated successfully.
Dec 12 18:40:07.169459 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 18:40:07.170604 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit.
Dec 12 18:40:07.175863 systemd[1]: Started sshd@12-134.199.209.86:22-147.75.109.163:56558.service - OpenSSH per-connection server daemon (147.75.109.163:56558).
Dec 12 18:40:07.177678 systemd-logind[1510]: Removed session 12.
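[Annotation] Each SSH connection above appears as a socket-activated per-connection unit whose name encodes a connection counter plus the local and remote endpoints (for example sshd@11-134.199.209.86:22-147.75.109.163:56556.service). The following illustrative Go sketch splits such a name into its parts; the regular expression is an assumption based on the names seen in this log and only handles IPv4 endpoints.

// Illustrative sketch: decoding the per-connection sshd unit names that
// systemd socket activation assigns in the journal excerpt above.
package main

import (
	"fmt"
	"regexp"
)

// unitRe captures the connection counter, local endpoint and remote endpoint
// from a name of the form sshd@<n>-<local addr>:<port>-<remote addr>:<port>.service.
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	// A unit name copied from the journal excerpt above.
	unit := "sshd@11-134.199.209.86:22-147.75.109.163:56556.service"

	m := unitRe.FindStringSubmatch(unit)
	if m == nil {
		fmt.Println("unit name did not match the expected pattern")
		return
	}
	fmt.Printf("connection #%s: local %s:%s <- remote %s:%s\n", m[1], m[2], m[3], m[4], m[5])
}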
Dec 12 18:40:07.261033 sshd[3830]: Accepted publickey for core from 147.75.109.163 port 56558 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:07.263366 sshd-session[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:07.270586 systemd-logind[1510]: New session 13 of user core.
Dec 12 18:40:07.275737 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:40:07.527278 sshd[3833]: Connection closed by 147.75.109.163 port 56558
Dec 12 18:40:07.527948 sshd-session[3830]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:07.541794 systemd[1]: sshd@12-134.199.209.86:22-147.75.109.163:56558.service: Deactivated successfully.
Dec 12 18:40:07.544846 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:40:07.546195 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:40:07.551808 systemd[1]: Started sshd@13-134.199.209.86:22-147.75.109.163:56572.service - OpenSSH per-connection server daemon (147.75.109.163:56572).
Dec 12 18:40:07.552640 systemd-logind[1510]: Removed session 13.
Dec 12 18:40:07.617283 sshd[3843]: Accepted publickey for core from 147.75.109.163 port 56572 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:07.619242 sshd-session[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:07.627183 systemd-logind[1510]: New session 14 of user core.
Dec 12 18:40:07.636787 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:40:08.486130 sshd[3846]: Connection closed by 147.75.109.163 port 56572
Dec 12 18:40:08.486617 sshd-session[3843]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:08.505174 systemd[1]: sshd@13-134.199.209.86:22-147.75.109.163:56572.service: Deactivated successfully.
Dec 12 18:40:08.509456 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:40:08.512706 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:40:08.521468 systemd[1]: Started sshd@14-134.199.209.86:22-147.75.109.163:56584.service - OpenSSH per-connection server daemon (147.75.109.163:56584).
Dec 12 18:40:08.528683 systemd-logind[1510]: Removed session 14.
Dec 12 18:40:08.610388 sshd[3863]: Accepted publickey for core from 147.75.109.163 port 56584 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:08.612552 sshd-session[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:08.619271 systemd-logind[1510]: New session 15 of user core.
Dec 12 18:40:08.627993 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:40:08.963636 sshd[3866]: Connection closed by 147.75.109.163 port 56584
Dec 12 18:40:08.964107 sshd-session[3863]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:08.982836 systemd[1]: sshd@14-134.199.209.86:22-147.75.109.163:56584.service: Deactivated successfully.
Dec 12 18:40:08.987854 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:40:08.990010 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:40:08.999215 systemd[1]: Started sshd@15-134.199.209.86:22-147.75.109.163:56588.service - OpenSSH per-connection server daemon (147.75.109.163:56588).
Dec 12 18:40:09.002480 systemd-logind[1510]: Removed session 15.
Dec 12 18:40:09.083591 sshd[3876]: Accepted publickey for core from 147.75.109.163 port 56588 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:09.086540 sshd-session[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:09.095031 systemd-logind[1510]: New session 16 of user core.
Dec 12 18:40:09.108852 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:40:09.259729 sshd[3879]: Connection closed by 147.75.109.163 port 56588
Dec 12 18:40:09.260706 sshd-session[3876]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:09.264661 systemd[1]: sshd@15-134.199.209.86:22-147.75.109.163:56588.service: Deactivated successfully.
Dec 12 18:40:09.270078 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:40:09.276837 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:40:09.279173 systemd-logind[1510]: Removed session 16.
Dec 12 18:40:14.277205 systemd[1]: Started sshd@16-134.199.209.86:22-147.75.109.163:42228.service - OpenSSH per-connection server daemon (147.75.109.163:42228).
Dec 12 18:40:14.352134 sshd[3913]: Accepted publickey for core from 147.75.109.163 port 42228 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:14.353773 sshd-session[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:14.360097 systemd-logind[1510]: New session 17 of user core.
Dec 12 18:40:14.367796 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 18:40:14.414159 kubelet[2660]: E1212 18:40:14.414080 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:40:14.499004 sshd[3922]: Connection closed by 147.75.109.163 port 42228
Dec 12 18:40:14.499988 sshd-session[3913]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:14.506933 systemd[1]: sshd@16-134.199.209.86:22-147.75.109.163:42228.service: Deactivated successfully.
Dec 12 18:40:14.509616 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:40:14.511533 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:40:14.513339 systemd-logind[1510]: Removed session 17.
Dec 12 18:40:19.519611 systemd[1]: Started sshd@17-134.199.209.86:22-147.75.109.163:42230.service - OpenSSH per-connection server daemon (147.75.109.163:42230).
Dec 12 18:40:19.595992 sshd[3955]: Accepted publickey for core from 147.75.109.163 port 42230 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:19.597427 sshd-session[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:19.607176 systemd-logind[1510]: New session 18 of user core.
Dec 12 18:40:19.610804 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:40:19.750142 sshd[3970]: Connection closed by 147.75.109.163 port 42230
Dec 12 18:40:19.752777 sshd-session[3955]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:19.758670 systemd[1]: sshd@17-134.199.209.86:22-147.75.109.163:42230.service: Deactivated successfully.
Dec 12 18:40:19.761403 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:40:19.762848 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:40:19.764948 systemd-logind[1510]: Removed session 18.
Dec 12 18:40:24.413794 kubelet[2660]: E1212 18:40:24.413728 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:40:24.769883 systemd[1]: Started sshd@18-134.199.209.86:22-147.75.109.163:33556.service - OpenSSH per-connection server daemon (147.75.109.163:33556).
Dec 12 18:40:24.845955 sshd[4006]: Accepted publickey for core from 147.75.109.163 port 33556 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:24.848255 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:24.855162 systemd-logind[1510]: New session 19 of user core.
Dec 12 18:40:24.859733 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:40:25.014721 sshd[4009]: Connection closed by 147.75.109.163 port 33556
Dec 12 18:40:25.015708 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:25.022613 systemd[1]: sshd@18-134.199.209.86:22-147.75.109.163:33556.service: Deactivated successfully.
Dec 12 18:40:25.026071 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:40:25.028208 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:40:25.031066 systemd-logind[1510]: Removed session 19.
Dec 12 18:40:27.416779 kubelet[2660]: E1212 18:40:27.416723 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Dec 12 18:40:30.035122 systemd[1]: Started sshd@19-134.199.209.86:22-147.75.109.163:33570.service - OpenSSH per-connection server daemon (147.75.109.163:33570).
Dec 12 18:40:30.128966 sshd[4042]: Accepted publickey for core from 147.75.109.163 port 33570 ssh2: RSA SHA256:GRQL0eALjfXZL9nnc74Wl3SaxeVaiPCxC4C6IH1H/CM
Dec 12 18:40:30.130854 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:40:30.139737 systemd-logind[1510]: New session 20 of user core.
Dec 12 18:40:30.153979 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:40:30.352407 sshd[4045]: Connection closed by 147.75.109.163 port 33570
Dec 12 18:40:30.353995 sshd-session[4042]: pam_unix(sshd:session): session closed for user core
Dec 12 18:40:30.363733 systemd[1]: sshd@19-134.199.209.86:22-147.75.109.163:33570.service: Deactivated successfully.
Dec 12 18:40:30.367023 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:40:30.369570 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:40:30.373071 systemd-logind[1510]: Removed session 20.
Dec 12 18:40:31.414303 kubelet[2660]: E1212 18:40:31.414237 2660 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
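[Annotation] Each of the SSH connections above pairs a systemd-logind "New session N" entry with a later "Removed session N" entry. The following rough, illustrative Go sketch pairs those two entries to estimate per-session duration; the hard-coded sample lines are copied from the excerpt above, and the timestamp layout assumes the journal's short format, which carries no year.

// Illustrative sketch: pairing "New session" / "Removed session" journal lines
// to estimate how long each SSH session lasted. Reading journalctl output is
// left out; a few lines from the excerpt above are hard-coded instead.
package main

import (
	"fmt"
	"regexp"
	"time"
)

const stampLayout = "Jan 2 15:04:05.000000" // journal short timestamps, no year

var (
	newRe     = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func main() {
	lines := []string{
		"Dec 12 18:40:24.855162 systemd-logind[1510]: New session 19 of user core.",
		"Dec 12 18:40:25.031066 systemd-logind[1510]: Removed session 19.",
		"Dec 12 18:40:30.139737 systemd-logind[1510]: New session 20 of user core.",
		"Dec 12 18:40:30.373071 systemd-logind[1510]: Removed session 20.",
	}

	opened := map[string]time.Time{} // session ID -> open timestamp
	for _, line := range lines {
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stampLayout, m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lasted %v\n", m[2], t.Sub(start))
				}
			}
		}
	}
}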