Dec 16 13:21:59.027015 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 16 13:21:59.027085 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:21:59.027099 kernel: BIOS-provided physical RAM map:
Dec 16 13:21:59.027110 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 16 13:21:59.027120 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 16 13:21:59.027129 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 16 13:21:59.027145 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 16 13:21:59.027155 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 16 13:21:59.027165 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 16 13:21:59.027174 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 16 13:21:59.027184 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 13:21:59.027193 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 16 13:21:59.027202 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 16 13:21:59.027212 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 13:21:59.027227 kernel: NX (Execute Disable) protection: active
Dec 16 13:21:59.027238 kernel: APIC: Static calls initialized
Dec 16 13:21:59.027247 kernel: SMBIOS 2.8 present.
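The BIOS-e820 lines above can be totalled mechanically. As a minimal sketch (the helper name and regex are ours, matched only to the line format shown in this log), a small Python parser that sums the ranges marked `usable`:

```python
import re

# Matches the "BIOS-e820: [mem 0xSTART-0xEND] TYPE" shape seen in this log.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Sum the sizes of all e820 ranges reported as 'usable'."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
    "BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable",
]
print(usable_bytes(lines) // 1024, "KiB usable")  # roughly 4 GiB for this VM
```

The raw e820 total is slightly larger than the `Memory:` figure the kernel reports later, since the kernel subsequently reserves a few pages (see the `e820: update`/`e820: remove` entries below).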
Dec 16 13:21:59.027258 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 16 13:21:59.027268 kernel: DMI: Memory slots populated: 1/1
Dec 16 13:21:59.027277 kernel: Hypervisor detected: KVM
Dec 16 13:21:59.027290 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:21:59.027300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 13:21:59.027310 kernel: kvm-clock: using sched offset of 7690930705 cycles
Dec 16 13:21:59.027321 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 13:21:59.027332 kernel: tsc: Detected 1999.999 MHz processor
Dec 16 13:21:59.027343 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 13:21:59.027354 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 13:21:59.027365 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 16 13:21:59.027375 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 16 13:21:59.027386 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 13:21:59.027400 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 16 13:21:59.027410 kernel: Using GB pages for direct mapping
Dec 16 13:21:59.027420 kernel: ACPI: Early table checksum verification disabled
Dec 16 13:21:59.027431 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 16 13:21:59.027441 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027452 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027463 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027474 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 16 13:21:59.027484 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027498 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027513 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027524 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 13:21:59.027535 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 16 13:21:59.027546 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 16 13:21:59.027560 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 16 13:21:59.027571 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 16 13:21:59.027582 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 16 13:21:59.027593 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 16 13:21:59.027604 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 16 13:21:59.027615 kernel: No NUMA configuration found
Dec 16 13:21:59.027625 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 16 13:21:59.027636 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Dec 16 13:21:59.027646 kernel: Zone ranges:
Dec 16 13:21:59.027660 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 13:21:59.027670 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 13:21:59.027681 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:21:59.027691 kernel: Device empty
Dec 16 13:21:59.027702 kernel: Movable zone start for each node
Dec 16 13:21:59.027712 kernel: Early memory node ranges
Dec 16 13:21:59.027723 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 16 13:21:59.027733 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 16 13:21:59.027745 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 16 13:21:59.027756 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 16 13:21:59.027771 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 13:21:59.027782 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 13:21:59.027793 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 16 13:21:59.027804 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 13:21:59.027814 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 13:21:59.027825 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 13:21:59.027836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 13:21:59.027847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 13:21:59.027858 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 13:21:59.027872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 13:21:59.027883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 13:21:59.027893 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 13:21:59.027904 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 13:21:59.027916 kernel: TSC deadline timer available
Dec 16 13:21:59.027927 kernel: CPU topo: Max. logical packages: 1
Dec 16 13:21:59.027938 kernel: CPU topo: Max. logical dies: 1
Dec 16 13:21:59.027948 kernel: CPU topo: Max. dies per package: 1
Dec 16 13:21:59.027959 kernel: CPU topo: Max. threads per core: 1
Dec 16 13:21:59.027974 kernel: CPU topo: Num. cores per package: 2
Dec 16 13:21:59.027984 kernel: CPU topo: Num. threads per package: 2
Dec 16 13:21:59.027995 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 13:21:59.028005 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 13:21:59.028016 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 13:21:59.028026 kernel: kvm-guest: setup PV sched yield
Dec 16 13:21:59.028053 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 16 13:21:59.028064 kernel: Booting paravirtualized kernel on KVM
Dec 16 13:21:59.028075 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 13:21:59.028089 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 13:21:59.028100 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 13:21:59.028110 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 13:21:59.028121 kernel: pcpu-alloc: [0] 0 1
Dec 16 13:21:59.028131 kernel: kvm-guest: PV spinlocks enabled
Dec 16 13:21:59.028142 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 13:21:59.028154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:21:59.028166 kernel: random: crng init done
Dec 16 13:21:59.028180 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 13:21:59.028191 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 13:21:59.028202 kernel: Fallback order for Node 0: 0
Dec 16 13:21:59.028213 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 16 13:21:59.028223 kernel: Policy zone: Normal
Dec 16 13:21:59.028234 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 13:21:59.028245 kernel: software IO TLB: area num 2.
Dec 16 13:21:59.028256 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 13:21:59.028267 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 13:21:59.028278 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 13:21:59.028292 kernel: Dynamic Preempt: voluntary
Dec 16 13:21:59.028303 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 13:21:59.028315 kernel: rcu: RCU event tracing is enabled.
Dec 16 13:21:59.028326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 13:21:59.028337 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 13:21:59.028348 kernel: Rude variant of Tasks RCU enabled.
Dec 16 13:21:59.028359 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 13:21:59.028369 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 13:21:59.028380 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 13:21:59.028394 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:21:59.028416 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:21:59.028431 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 13:21:59.028442 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 16 13:21:59.028454 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
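The `Kernel command line:` entry above is a whitespace-separated mix of bare flags and `key=value` pairs. A minimal sketch of splitting it (the function name is ours, not a kernel interface; note that a flat dict keeps only the last occurrence of a repeated key such as `console=`, whereas the kernel honours all of them):

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into bare flags and key=value parameters."""
    flags, params = [], {}
    for tok in cmdline.split():
        if "=" in tok:
            key, _, val = tok.partition("=")
            params[key] = val  # later duplicates overwrite earlier ones here
        else:
            flags.append(tok)
    return flags, params

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
           "flatcar.oem.id=akamai")
flags, params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # tty0 -- only the last console= survives in the dict
```

On a running system the same string is exposed at `/proc/cmdline`.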
Dec 16 13:21:59.028465 kernel: Console: colour VGA+ 80x25
Dec 16 13:21:59.028476 kernel: printk: legacy console [tty0] enabled
Dec 16 13:21:59.028488 kernel: printk: legacy console [ttyS0] enabled
Dec 16 13:21:59.028499 kernel: ACPI: Core revision 20240827
Dec 16 13:21:59.028514 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 13:21:59.028526 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 13:21:59.028537 kernel: x2apic enabled
Dec 16 13:21:59.028548 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 13:21:59.028559 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 13:21:59.028570 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 13:21:59.028581 kernel: kvm-guest: setup PV IPIs
Dec 16 13:21:59.028595 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 13:21:59.028607 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 16 13:21:59.028618 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Dec 16 13:21:59.028630 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 13:21:59.028641 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 13:21:59.028653 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 13:21:59.028665 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 13:21:59.028676 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 13:21:59.028687 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 13:21:59.028701 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 16 13:21:59.028711 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 13:21:59.028722 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 13:21:59.028734 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 13:21:59.028747 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
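The mitigation lines above share a rough `Name : Status` shape. A sketch of collecting them into a dict (the helper and its parsing rule are ours, fitted only to the lines shown here; the first status line per vulnerability is kept, since follow-up lines add detail):

```python
def parse_mitigations(lines):
    """Map vulnerability name -> first reported status for 'Name : Status' lines."""
    out = {}
    for line in lines:
        name, sep, status = line.partition(": ")
        if sep:
            # Some names carry a trailing " :" (e.g. "Spectre V1 : ..."); trim it.
            out.setdefault(name.rstrip(" :"), status.strip())
    return out

lines = [
    "Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
    "Spectre V2 : Mitigation: Retpolines",
    "Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT",
    "Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl",
]
report = parse_mitigations(lines)
print(report["Spectre V2"])  # Mitigation: Retpolines
```

On a live machine, the authoritative per-vulnerability status lives in `/sys/devices/system/cpu/vulnerabilities/`.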
Dec 16 13:21:59.028758 kernel: active return thunk: srso_alias_return_thunk
Dec 16 13:21:59.028770 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 13:21:59.028782 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 16 13:21:59.028796 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 16 13:21:59.028808 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 13:21:59.028820 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 13:21:59.028831 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 13:21:59.028842 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 16 13:21:59.028854 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 13:21:59.028865 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 16 13:21:59.028877 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 16 13:21:59.028888 kernel: Freeing SMP alternatives memory: 32K
Dec 16 13:21:59.028904 kernel: pid_max: default: 32768 minimum: 301
Dec 16 13:21:59.028915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 13:21:59.028926 kernel: landlock: Up and running.
Dec 16 13:21:59.028938 kernel: SELinux: Initializing.
Dec 16 13:21:59.028949 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:21:59.028961 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 13:21:59.028972 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 16 13:21:59.028984 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 13:21:59.028995 kernel: ... version: 0
Dec 16 13:21:59.029010 kernel: ... bit width: 48
Dec 16 13:21:59.029021 kernel: ... generic registers: 6
Dec 16 13:21:59.046766 kernel: ... value mask: 0000ffffffffffff
Dec 16 13:21:59.046789 kernel: ... max period: 00007fffffffffff
Dec 16 13:21:59.046798 kernel: ... fixed-purpose events: 0
Dec 16 13:21:59.046806 kernel: ... event mask: 000000000000003f
Dec 16 13:21:59.046814 kernel: signal: max sigframe size: 3376
Dec 16 13:21:59.046821 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 13:21:59.046830 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 13:21:59.046856 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 13:21:59.046865 kernel: smp: Bringing up secondary CPUs ...
Dec 16 13:21:59.046873 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 13:21:59.046880 kernel: .... node #0, CPUs: #1
Dec 16 13:21:59.046888 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 13:21:59.046895 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Dec 16 13:21:59.046904 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235488K reserved, 0K cma-reserved)
Dec 16 13:21:59.046912 kernel: devtmpfs: initialized
Dec 16 13:21:59.046920 kernel: x86/mm: Memory block size: 128MB
Dec 16 13:21:59.046930 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 13:21:59.046938 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 13:21:59.046946 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 13:21:59.046954 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 13:21:59.046961 kernel: audit: initializing netlink subsys (disabled)
Dec 16 13:21:59.046969 kernel: audit: type=2000 audit(1765891316.902:1): state=initialized audit_enabled=0 res=1
Dec 16 13:21:59.046977 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 13:21:59.046984 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 13:21:59.046992 kernel: cpuidle: using governor menu
Dec 16 13:21:59.047002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 13:21:59.047009 kernel: dca service started, version 1.12.1
Dec 16 13:21:59.047017 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 16 13:21:59.047025 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 16 13:21:59.047047 kernel: PCI: Using configuration type 1 for base access
Dec 16 13:21:59.047055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 13:21:59.047063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 13:21:59.047071 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 13:21:59.047078 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 13:21:59.047089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 13:21:59.047096 kernel: ACPI: Added _OSI(Module Device)
Dec 16 13:21:59.047104 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 13:21:59.047111 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 13:21:59.047119 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 13:21:59.047126 kernel: ACPI: Interpreter enabled
Dec 16 13:21:59.047134 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 16 13:21:59.047141 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 13:21:59.047149 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 13:21:59.047159 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 13:21:59.047167 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 13:21:59.047175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 13:21:59.047471 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 13:21:59.047673 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 13:21:59.047867 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 13:21:59.047882 kernel: PCI host bridge to bus 0000:00
Dec 16 13:21:59.048012 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 13:21:59.048165 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 13:21:59.048279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 13:21:59.048389 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 16 13:21:59.048503 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 16 13:21:59.048642 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 16 13:21:59.048754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 13:21:59.048914 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 13:21:59.049089 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 13:21:59.049222 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 16 13:21:59.049343 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 16 13:21:59.049462 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 16 13:21:59.049662 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 13:21:59.049857 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 16 13:21:59.050062 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 16 13:21:59.050240 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 16 13:21:59.050423 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 16 13:21:59.050615 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 13:21:59.050828 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 16 13:21:59.050988 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 16 13:21:59.051165 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 16 13:21:59.051290 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 16 13:21:59.051422 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 13:21:59.051541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 13:21:59.051667 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 13:21:59.051786 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 16 13:21:59.051903 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 16 13:21:59.052049 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 13:21:59.052176 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 16 13:21:59.052186 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 13:21:59.052193 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 13:21:59.052201 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 13:21:59.052208 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 13:21:59.052216 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 13:21:59.052223 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 13:21:59.052234 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 13:21:59.052241 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 13:21:59.052248 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 13:21:59.052255 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 13:21:59.052262 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 13:21:59.052269 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 13:21:59.052277 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 13:21:59.052284 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 13:21:59.052291 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 13:21:59.052300 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 13:21:59.052307 kernel: iommu: Default domain type: Translated
Dec 16 13:21:59.052314 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 13:21:59.052321 kernel: PCI: Using ACPI for IRQ routing
Dec 16 13:21:59.052328 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 13:21:59.052336 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 16 13:21:59.052343 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 16 13:21:59.052462 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 13:21:59.052584 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 13:21:59.052705 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 13:21:59.052715 kernel: vgaarb: loaded
Dec 16 13:21:59.052722 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 13:21:59.052730 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 13:21:59.052737 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 13:21:59.052744 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 13:21:59.052751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 13:21:59.052759 kernel: pnp: PnP ACPI init
Dec 16 13:21:59.052936 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 16 13:21:59.052952 kernel: pnp: PnP ACPI: found 5 devices
Dec 16 13:21:59.052961 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 13:21:59.052968 kernel: NET: Registered PF_INET protocol family
Dec 16 13:21:59.052976 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 13:21:59.052984 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 13:21:59.052991 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 13:21:59.052999 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 13:21:59.053011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 13:21:59.053018 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 13:21:59.053025 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:21:59.053052 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 13:21:59.053061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 13:21:59.053068 kernel: NET: Registered PF_XDP protocol family
Dec 16 13:21:59.053187 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 13:21:59.053299 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 13:21:59.053409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 13:21:59.053523 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 16 13:21:59.053632 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 16 13:21:59.053741 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 16 13:21:59.053750 kernel: PCI: CLS 0 bytes, default 64
Dec 16 13:21:59.053758 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 13:21:59.053765 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 16 13:21:59.053773 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Dec 16 13:21:59.053780 kernel: Initialise system trusted keyrings
Dec 16 13:21:59.053791 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 13:21:59.053798 kernel: Key type asymmetric registered
Dec 16 13:21:59.053805 kernel: Asymmetric key parser 'x509' registered
Dec 16 13:21:59.053812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 13:21:59.053819 kernel: io scheduler mq-deadline registered
Dec 16 13:21:59.053826 kernel: io scheduler kyber registered
Dec 16 13:21:59.053834 kernel: io scheduler bfq registered
Dec 16 13:21:59.053841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 13:21:59.053849 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 13:21:59.053859 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 13:21:59.053866 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 13:21:59.053873 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 13:21:59.053881 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 13:21:59.053888 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 13:21:59.053895 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 13:21:59.054063 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 16 13:21:59.054076 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 13:21:59.054197 kernel: rtc_cmos 00:03: registered as rtc0
Dec 16 13:21:59.054317 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T13:21:58 UTC (1765891318)
Dec 16 13:21:59.054463 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 16 13:21:59.054476 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 13:21:59.054483 kernel: NET: Registered PF_INET6 protocol family
Dec 16 13:21:59.054491 kernel: Segment Routing with IPv6
Dec 16 13:21:59.054498 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 13:21:59.054505 kernel: NET: Registered PF_PACKET protocol family
Dec 16 13:21:59.054512 kernel: Key type dns_resolver registered
Dec 16 13:21:59.054523 kernel: IPI shorthand broadcast: enabled
Dec 16 13:21:59.054531 kernel: sched_clock: Marking stable (2946008066, 354258783)->(3517188988, -216922139)
Dec 16 13:21:59.054538 kernel: registered taskstats version 1
Dec 16 13:21:59.054545 kernel: Loading compiled-in X.509 certificates
Dec 16 13:21:59.054553 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 16 13:21:59.054560 kernel: Demotion targets for Node 0: null
Dec 16 13:21:59.054567 kernel: Key type .fscrypt registered
Dec 16 13:21:59.054574 kernel: Key type fscrypt-provisioning registered
Dec 16 13:21:59.054581 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 13:21:59.054591 kernel: ima: Allocated hash algorithm: sha1
Dec 16 13:21:59.054598 kernel: ima: No architecture policies found
Dec 16 13:21:59.054605 kernel: clk: Disabling unused clocks
Dec 16 13:21:59.054613 kernel: Warning: unable to open an initial console.
Dec 16 13:21:59.054620 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 16 13:21:59.054628 kernel: Write protecting the kernel read-only data: 40960k
Dec 16 13:21:59.054635 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 16 13:21:59.054642 kernel: Run /init as init process
Dec 16 13:21:59.054649 kernel: with arguments:
Dec 16 13:21:59.054659 kernel: /init
Dec 16 13:21:59.054666 kernel: with environment:
Dec 16 13:21:59.054688 kernel: HOME=/
Dec 16 13:21:59.054698 kernel: TERM=linux
Dec 16 13:21:59.054706 systemd[1]: Successfully made /usr/ read-only.
Dec 16 13:21:59.054717 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:21:59.054726 systemd[1]: Detected virtualization kvm.
Dec 16 13:21:59.054736 systemd[1]: Detected architecture x86-64.
Dec 16 13:21:59.054743 systemd[1]: Running in initrd.
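The rtc_cmos entry above prints the wall-clock time in both forms: an ISO-8601 timestamp and, in parentheses, the matching Unix epoch. The two are interchangeable, e.g.:

```python
from datetime import datetime, timezone

# Epoch value taken from the "setting system clock" rtc_cmos line above.
t = datetime.fromtimestamp(1765891318, tz=timezone.utc)
print(t.isoformat())  # 2025-12-16T13:21:58+00:00
```

The same epoch base also appears in the earlier `audit(1765891316.902:1)` record, before the RTC adjustment.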
Dec 16 13:21:59.054751 systemd[1]: No hostname configured, using default hostname. Dec 16 13:21:59.054759 systemd[1]: Hostname set to . Dec 16 13:21:59.054766 systemd[1]: Initializing machine ID from random generator. Dec 16 13:21:59.054774 systemd[1]: Queued start job for default target initrd.target. Dec 16 13:21:59.054782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 13:21:59.054790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:21:59.054798 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 13:21:59.054808 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:21:59.054816 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 13:21:59.054825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 13:21:59.054834 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 13:21:59.054842 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 13:21:59.054850 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:21:59.054860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:21:59.054868 systemd[1]: Reached target paths.target - Path Units. Dec 16 13:21:59.054876 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:21:59.054883 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:21:59.054891 systemd[1]: Reached target timers.target - Timer Units. Dec 16 13:21:59.054899 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Dec 16 13:21:59.054907 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:21:59.054915 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 13:21:59.054923 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 13:21:59.054933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:21:59.054941 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:21:59.054951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:21:59.054959 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:21:59.054967 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 13:21:59.054977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 13:21:59.054985 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 13:21:59.054993 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 13:21:59.055001 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 13:21:59.055009 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 13:21:59.055017 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 13:21:59.055025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:21:59.055114 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 13:21:59.055132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:21:59.055184 systemd-journald[187]: Collecting audit messages is disabled.
Dec 16 13:21:59.055209 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 13:21:59.055217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 13:21:59.055226 systemd-journald[187]: Journal started
Dec 16 13:21:59.055243 systemd-journald[187]: Runtime Journal (/run/log/journal/b05fb08218304efb8c2810eeaa785609) is 8M, max 78.2M, 70.2M free.
Dec 16 13:21:58.982307 systemd-modules-load[188]: Inserted module 'overlay'
Dec 16 13:21:59.165622 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 13:21:59.165663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 13:21:59.165684 kernel: Bridge firewalling registered
Dec 16 13:21:59.080639 systemd-modules-load[188]: Inserted module 'br_netfilter'
Dec 16 13:21:59.165134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:21:59.166499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:21:59.168085 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 13:21:59.172181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 13:21:59.175493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 13:21:59.179191 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 13:21:59.188143 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 13:21:59.222487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:21:59.223874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:21:59.231687 systemd-tmpfiles[205]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 13:21:59.233454 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:21:59.237253 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 13:21:59.239218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:21:59.243254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:21:59.263260 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 16 13:21:59.294678 systemd-resolved[226]: Positive Trust Anchors:
Dec 16 13:21:59.295638 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:21:59.295668 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:21:59.299512 systemd-resolved[226]: Defaulting to hostname 'linux'.
Dec 16 13:21:59.303726 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:21:59.305130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:21:59.389167 kernel: SCSI subsystem initialized
Dec 16 13:21:59.399070 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 13:21:59.412081 kernel: iscsi: registered transport (tcp)
Dec 16 13:21:59.435200 kernel: iscsi: registered transport (qla4xxx)
Dec 16 13:21:59.435275 kernel: QLogic iSCSI HBA Driver
Dec 16 13:21:59.463612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 13:21:59.481895 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:21:59.484880 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 13:21:59.543094 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:21:59.545331 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 13:21:59.602085 kernel: raid6: avx2x4 gen() 30953 MB/s
Dec 16 13:21:59.620073 kernel: raid6: avx2x2 gen() 30065 MB/s
Dec 16 13:21:59.638203 kernel: raid6: avx2x1 gen() 21300 MB/s
Dec 16 13:21:59.638245 kernel: raid6: using algorithm avx2x4 gen() 30953 MB/s
Dec 16 13:21:59.660564 kernel: raid6: .... xor() 5374 MB/s, rmw enabled
Dec 16 13:21:59.660641 kernel: raid6: using avx2x2 recovery algorithm
Dec 16 13:21:59.797074 kernel: xor: automatically using best checksumming function avx
Dec 16 13:22:00.059100 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 13:22:00.068026 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:22:00.071476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:22:00.110183 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 16 13:22:00.119190 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:22:00.124592 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 13:22:00.161288 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Dec 16 13:22:00.197406 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:22:00.201871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 13:22:00.306339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:22:00.312348 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 13:22:00.404139 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Dec 16 13:22:00.426075 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 16 13:22:00.979067 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 13:22:01.031267 kernel: scsi host0: Virtio SCSI HBA
Dec 16 13:22:01.044079 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Dec 16 13:22:01.136007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:22:01.210834 kernel: libata version 3.00 loaded.
Dec 16 13:22:01.193146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:22:01.215507 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:22:01.220343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:22:01.230466 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:22:01.274079 kernel: AES CTR mode by8 optimization enabled
Dec 16 13:22:01.291087 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 13:22:01.291427 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 13:22:01.303072 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 13:22:01.303404 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 13:22:01.303640 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 13:22:01.363987 kernel: scsi host1: ahci
Dec 16 13:22:01.364296 kernel: sd 0:0:0:0: Power-on or device reset occurred
Dec 16 13:22:01.364546 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Dec 16 13:22:01.364759 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 13:22:01.364973 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Dec 16 13:22:01.365211 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 16 13:22:01.365422 kernel: scsi host2: ahci
Dec 16 13:22:01.401121 kernel: scsi host3: ahci
Dec 16 13:22:01.405556 kernel: scsi host4: ahci
Dec 16 13:22:01.406168 kernel: scsi host5: ahci
Dec 16 13:22:01.415106 kernel: scsi host6: ahci
Dec 16 13:22:01.415536 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1
Dec 16 13:22:01.415562 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1
Dec 16 13:22:01.415584 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1
Dec 16 13:22:01.415600 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1
Dec 16 13:22:01.415615 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1
Dec 16 13:22:01.415629 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1
Dec 16 13:22:01.415655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 13:22:01.415685 kernel: GPT:9289727 != 167739391
Dec 16 13:22:01.415708 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 13:22:01.415727 kernel: GPT:9289727 != 167739391
Dec 16 13:22:01.415748 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 13:22:01.415764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:22:01.415781 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 16 13:22:01.557134 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:22:01.730675 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.730753 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.731058 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.737290 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.740076 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.747635 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 16 13:22:01.965722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Dec 16 13:22:01.992732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Dec 16 13:22:02.014572 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Dec 16 13:22:02.021212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Dec 16 13:22:02.066581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:22:02.082050 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:22:02.097004 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:22:02.099353 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:22:02.106279 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 13:22:02.109840 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 13:22:02.115090 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 13:22:02.151031 disk-uuid[620]: Primary Header is updated.
Dec 16 13:22:02.151031 disk-uuid[620]: Secondary Entries is updated.
Dec 16 13:22:02.151031 disk-uuid[620]: Secondary Header is updated.
Dec 16 13:22:02.185625 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:22:02.265086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:22:02.301310 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:22:03.289094 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 16 13:22:03.289167 disk-uuid[623]: The operation has completed successfully.
Dec 16 13:22:03.416208 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 13:22:03.416369 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 13:22:03.457646 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 13:22:03.484015 sh[642]: Success
Dec 16 13:22:03.512280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 13:22:03.512365 kernel: device-mapper: uevent: version 1.0.3
Dec 16 13:22:03.514743 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 13:22:03.535377 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 16 13:22:03.666649 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 13:22:03.675204 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 13:22:03.689927 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 13:22:03.721078 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (654)
Dec 16 13:22:03.732982 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 16 13:22:03.733072 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:22:03.757121 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 13:22:03.757204 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 13:22:03.761607 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 13:22:03.766364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 13:22:03.771025 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:22:03.773552 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 13:22:03.777191 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 13:22:03.782225 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 13:22:03.847121 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (695)
Dec 16 13:22:03.856317 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:22:03.856409 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:22:03.871800 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:22:03.871886 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:22:03.871905 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:22:03.884068 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:22:03.887504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 13:22:03.890315 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 13:22:04.018388 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:22:04.059929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:22:04.686112 systemd-networkd[823]: lo: Link UP
Dec 16 13:22:04.686126 systemd-networkd[823]: lo: Gained carrier
Dec 16 13:22:04.688341 systemd-networkd[823]: Enumeration completed
Dec 16 13:22:04.689162 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:22:04.689755 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:22:04.689761 systemd-networkd[823]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:22:04.724009 systemd-networkd[823]: eth0: Link UP
Dec 16 13:22:04.724262 systemd-networkd[823]: eth0: Gained carrier
Dec 16 13:22:04.724280 systemd-networkd[823]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:22:04.726489 systemd[1]: Reached target network.target - Network.
Dec 16 13:22:05.073816 ignition[752]: Ignition 2.22.0
Dec 16 13:22:05.073834 ignition[752]: Stage: fetch-offline
Dec 16 13:22:05.073881 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:05.073891 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:05.076529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:22:05.074003 ignition[752]: parsed url from cmdline: ""
Dec 16 13:22:05.074007 ignition[752]: no config URL provided
Dec 16 13:22:05.074013 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:22:05.074022 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:22:05.080199 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 13:22:05.074027 ignition[752]: failed to fetch config: resource requires networking
Dec 16 13:22:05.074568 ignition[752]: Ignition finished successfully
Dec 16 13:22:05.279642 ignition[831]: Ignition 2.22.0
Dec 16 13:22:05.279663 ignition[831]: Stage: fetch
Dec 16 13:22:05.279898 ignition[831]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:05.279916 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:05.280080 ignition[831]: parsed url from cmdline: ""
Dec 16 13:22:05.280087 ignition[831]: no config URL provided
Dec 16 13:22:05.280097 ignition[831]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 13:22:05.280112 ignition[831]: no config at "/usr/lib/ignition/user.ign"
Dec 16 13:22:05.280174 ignition[831]: PUT http://169.254.169.254/v1/token: attempt #1
Dec 16 13:22:05.280538 ignition[831]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:22:05.480769 ignition[831]: PUT http://169.254.169.254/v1/token: attempt #2
Dec 16 13:22:05.481266 ignition[831]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:22:05.881363 ignition[831]: PUT http://169.254.169.254/v1/token: attempt #3
Dec 16 13:22:05.881531 ignition[831]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:22:06.535271 systemd-networkd[823]: eth0: Gained IPv6LL
Dec 16 13:22:06.682628 ignition[831]: PUT http://169.254.169.254/v1/token: attempt #4
Dec 16 13:22:06.682841 ignition[831]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 16 13:22:08.027184 systemd-networkd[823]: eth0: DHCPv4 address 172.236.100.113/24, gateway 172.236.100.1 acquired from 23.192.120.212
Dec 16 13:22:08.283885 ignition[831]: PUT http://169.254.169.254/v1/token: attempt #5
Dec 16 13:22:08.386063 ignition[831]: PUT result: OK
Dec 16 13:22:08.386204 ignition[831]: GET http://169.254.169.254/v1/user-data: attempt #1
Dec 16 13:22:08.500117 ignition[831]: GET result: OK
Dec 16 13:22:08.500230 ignition[831]: parsing config with SHA512: f55c7071e36457652be73c539183a9e88deb6a9d94513d982770b684d41007b986ad818233f9195f35e643494607313c37dd0b8d1ece73a5226ded9045f75e2c
Dec 16 13:22:08.505877 unknown[831]: fetched base config from "system"
Dec 16 13:22:08.505888 unknown[831]: fetched base config from "system"
Dec 16 13:22:08.506984 ignition[831]: fetch: fetch complete
Dec 16 13:22:08.505899 unknown[831]: fetched user config from "akamai"
Dec 16 13:22:08.506991 ignition[831]: fetch: fetch passed
Dec 16 13:22:08.510849 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 13:22:08.507058 ignition[831]: Ignition finished successfully
Dec 16 13:22:08.515851 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 13:22:08.573010 ignition[838]: Ignition 2.22.0
Dec 16 13:22:08.573063 ignition[838]: Stage: kargs
Dec 16 13:22:08.573270 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:08.573285 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:08.574415 ignition[838]: kargs: kargs passed
Dec 16 13:22:08.574476 ignition[838]: Ignition finished successfully
Dec 16 13:22:08.576896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 13:22:08.580205 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 13:22:08.613442 ignition[844]: Ignition 2.22.0
Dec 16 13:22:08.613459 ignition[844]: Stage: disks
Dec 16 13:22:08.613605 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:08.613623 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:08.638855 ignition[844]: disks: disks passed
Dec 16 13:22:08.638934 ignition[844]: Ignition finished successfully
Dec 16 13:22:08.640790 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 13:22:08.642417 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 13:22:08.643374 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 13:22:08.645179 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 13:22:08.647158 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:22:08.648867 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:22:08.651796 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 13:22:08.687710 systemd-fsck[852]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 13:22:08.691495 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 13:22:08.696149 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 13:22:08.817077 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 16 13:22:08.817851 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 13:22:08.819476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 13:22:08.822553 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:22:08.825839 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 13:22:08.829393 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 13:22:08.829474 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 13:22:08.829505 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:22:08.840124 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 13:22:08.843156 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 13:22:08.851103 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (861)
Dec 16 13:22:08.857110 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:22:08.857158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:22:08.865080 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:22:08.865138 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:22:08.869070 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:22:08.872816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:22:08.921094 initrd-setup-root[885]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 13:22:08.927551 initrd-setup-root[892]: cut: /sysroot/etc/group: No such file or directory
Dec 16 13:22:08.933828 initrd-setup-root[899]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 13:22:08.939307 initrd-setup-root[906]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 13:22:09.047204 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 13:22:09.050831 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 13:22:09.052491 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 13:22:09.074698 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 13:22:09.081075 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:22:09.094463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 13:22:09.158739 ignition[974]: INFO : Ignition 2.22.0
Dec 16 13:22:09.158739 ignition[974]: INFO : Stage: mount
Dec 16 13:22:09.160535 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:09.160535 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:09.160535 ignition[974]: INFO : mount: mount passed
Dec 16 13:22:09.160535 ignition[974]: INFO : Ignition finished successfully
Dec 16 13:22:09.162726 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 13:22:09.166162 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 13:22:09.820222 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 13:22:09.852082 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (985)
Dec 16 13:22:09.852146 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 16 13:22:09.855595 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 13:22:09.865906 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 13:22:09.865962 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 13:22:09.865992 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 13:22:09.871130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 13:22:10.008496 ignition[1001]: INFO : Ignition 2.22.0 Dec 16 13:22:10.008496 ignition[1001]: INFO : Stage: files Dec 16 13:22:10.028616 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 13:22:10.028616 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 16 13:22:10.030765 ignition[1001]: DEBUG : files: compiled without relabeling support, skipping Dec 16 13:22:10.031865 ignition[1001]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 13:22:10.031865 ignition[1001]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 13:22:10.034531 ignition[1001]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 13:22:10.035726 ignition[1001]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 13:22:10.037027 ignition[1001]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 13:22:10.036964 unknown[1001]: wrote ssh authorized keys file for user: core Dec 16 13:22:10.039496 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:22:10.039496 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 13:22:10.265495 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 13:22:10.356596 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:22:10.358378 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:22:10.374431 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 13:22:10.922446 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 13:22:12.878387 ignition[1001]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 13:22:12.880484 ignition[1001]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 13:22:12.881645 ignition[1001]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:22:12.884439 ignition[1001]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 13:22:12.884439 ignition[1001]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 
13:22:12.888409 ignition[1001]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 13:22:12.888409 ignition[1001]: INFO : files: files passed Dec 16 13:22:12.888409 ignition[1001]: INFO : Ignition finished successfully Dec 16 13:22:12.889838 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 13:22:12.895428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 13:22:12.901713 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 13:22:12.911684 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 13:22:12.913433 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 13:22:12.924224 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:22:12.925675 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:22:12.926797 initrd-setup-root-after-ignition[1032]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 13:22:12.927381 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 13:22:12.929075 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 13:22:12.932154 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 13:22:12.967577 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 13:22:12.967967 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 13:22:12.970158 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 13:22:12.971259 systemd[1]: Reached target initrd.target - Initrd Default Target. 
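The file writes (op(5)..op(a)), the prepare-helm.service unit, the coreos-metadata drop-in, and the enablement preset logged above are all driven by the instance's Ignition provisioning config, which is not shown in this log. As a purely hypothetical sketch (the real config, file contents, and unit bodies are unknown), a Butane fragment that would produce operations of this shape could look like:

```yaml
# Hypothetical Butane config (assumption: not the actual config used here).
# Butane transpiles this to the Ignition JSON that produced the ops above.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /home/core/nginx.yaml        # -> op(5) in the log
      contents:
        inline: |
          # pod manifest contents would go here
systemd:
  units:
    - name: prepare-helm.service          # -> op(b)/op(c) and the op(f) preset
      enabled: true
      contents: |
        [Unit]
        Description=Placeholder unit body (unknown in this log)
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service       # -> op(d)/op(e) drop-in
      dropins:
        - name: 00-custom-metadata.conf
          contents: |
            # drop-in contents unknown
```

Paths in the log carry a `/sysroot` prefix because Ignition runs in the initramfs and writes into the not-yet-pivoted root filesystem.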
Dec 16 13:22:12.973118 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 13:22:12.975153 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 13:22:13.001142 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:22:13.003492 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 13:22:13.030343 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:22:13.031513 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 13:22:13.033321 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 13:22:13.034952 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 13:22:13.035147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 13:22:13.036991 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 13:22:13.038226 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 13:22:13.039887 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 13:22:13.041398 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 13:22:13.043055 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 13:22:13.044750 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 13:22:13.046646 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 13:22:13.048364 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 13:22:13.050288 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 13:22:13.051920 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 13:22:13.053849 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 13:22:13.055433 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 13:22:13.055631 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 13:22:13.057713 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 13:22:13.059627 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 13:22:13.061398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 13:22:13.061583 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 13:22:13.063275 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 13:22:13.063561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 13:22:13.065483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 13:22:13.065679 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 13:22:13.066784 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 13:22:13.066931 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 13:22:13.071276 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 13:22:13.072303 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 13:22:13.074219 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 13:22:13.093257 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 13:22:13.094797 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 13:22:13.095847 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 13:22:13.099789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 13:22:13.100776 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 13:22:13.115102 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 13:22:13.132085 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 13:22:13.161769 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 13:22:13.240477 ignition[1056]: INFO : Ignition 2.22.0
Dec 16 13:22:13.240477 ignition[1056]: INFO : Stage: umount
Dec 16 13:22:13.240477 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 13:22:13.240477 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 16 13:22:13.296830 ignition[1056]: INFO : umount: umount passed
Dec 16 13:22:13.296830 ignition[1056]: INFO : Ignition finished successfully
Dec 16 13:22:13.243585 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 13:22:13.243825 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 13:22:13.270187 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 13:22:13.270319 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 13:22:13.298336 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 13:22:13.298416 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 13:22:13.300362 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 13:22:13.300425 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 13:22:13.301573 systemd[1]: Stopped target network.target - Network.
Dec 16 13:22:13.302916 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 13:22:13.302976 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 13:22:13.304397 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 13:22:13.305807 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 13:22:13.305872 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 13:22:13.307252 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 13:22:13.308695 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 13:22:13.310248 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 13:22:13.310298 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 13:22:13.311681 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 13:22:13.311723 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 13:22:13.313125 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 13:22:13.313191 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 13:22:13.314716 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 13:22:13.314766 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 13:22:13.316534 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 13:22:13.318195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 13:22:13.320361 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 13:22:13.320485 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 13:22:13.325506 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 13:22:13.325656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 13:22:13.330249 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 13:22:13.330469 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 13:22:13.336628 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 13:22:13.336992 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 13:22:13.337271 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 13:22:13.340540 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 13:22:13.341817 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 13:22:13.343764 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 13:22:13.343850 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 13:22:13.348154 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 13:22:13.350009 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 13:22:13.351143 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 13:22:13.352766 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 13:22:13.352842 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 13:22:13.375493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 13:22:13.375580 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 13:22:13.378334 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 13:22:13.378394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:22:13.381275 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 13:22:13.384318 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 13:22:13.384399 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:22:13.393616 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 13:22:13.394305 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:22:13.397811 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 13:22:13.397958 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 13:22:13.399935 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 13:22:13.400010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 13:22:13.401098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 13:22:13.401152 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 13:22:13.402673 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 13:22:13.402732 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 13:22:13.404831 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 13:22:13.404889 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 13:22:13.406436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 13:22:13.406491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 13:22:13.410152 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 13:22:13.411487 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 13:22:13.411550 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 13:22:13.416125 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 13:22:13.416202 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 13:22:13.417274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 13:22:13.417351 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:22:13.421699 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 13:22:13.421789 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 13:22:13.421867 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 13:22:13.432721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 13:22:13.432908 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 13:22:13.435073 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 13:22:13.437676 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 13:22:13.462708 systemd[1]: Switching root.
Dec 16 13:22:13.510679 systemd-journald[187]: Journal stopped
Dec 16 13:22:14.970236 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 16 13:22:14.970271 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 13:22:14.970284 kernel: SELinux: policy capability open_perms=1
Dec 16 13:22:14.970293 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 13:22:14.970302 kernel: SELinux: policy capability always_check_network=0
Dec 16 13:22:14.970313 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 13:22:14.970323 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 13:22:14.970332 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 13:22:14.970341 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 13:22:14.970350 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 13:22:14.970360 kernel: audit: type=1403 audit(1765891333.719:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 13:22:14.970370 systemd[1]: Successfully loaded SELinux policy in 96.392ms.
Dec 16 13:22:14.970383 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.232ms.
Dec 16 13:22:14.970394 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 13:22:14.970405 systemd[1]: Detected virtualization kvm.
Dec 16 13:22:14.970415 systemd[1]: Detected architecture x86-64.
Dec 16 13:22:14.970427 systemd[1]: Detected first boot. Dec 16 13:22:14.970438 systemd[1]: Initializing machine ID from random generator. Dec 16 13:22:14.970448 zram_generator::config[1103]: No configuration found. Dec 16 13:22:14.970459 kernel: Guest personality initialized and is inactive Dec 16 13:22:14.970468 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 13:22:14.970477 kernel: Initialized host personality Dec 16 13:22:14.970487 kernel: NET: Registered PF_VSOCK protocol family Dec 16 13:22:14.970497 systemd[1]: Populated /etc with preset unit settings. Dec 16 13:22:14.970517 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 13:22:14.970528 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 13:22:14.970538 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 13:22:14.970548 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 13:22:14.970558 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 13:22:14.970568 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 13:22:14.970579 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 13:22:14.970591 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 13:22:14.970602 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 13:22:14.970612 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 13:22:14.970623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 13:22:14.970633 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 13:22:14.970643 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 13:22:14.970655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 13:22:14.970665 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 13:22:14.970678 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 13:22:14.970691 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 13:22:14.970702 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 13:22:14.970713 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 13:22:14.970723 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 13:22:14.970734 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 13:22:14.970744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 13:22:14.970757 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 13:22:14.970767 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 13:22:14.970777 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 13:22:14.970788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 13:22:14.970798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 13:22:14.970809 systemd[1]: Reached target slices.target - Slice Units. Dec 16 13:22:14.970819 systemd[1]: Reached target swap.target - Swaps. Dec 16 13:22:14.970829 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 13:22:14.970840 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 13:22:14.970852 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. 
Dec 16 13:22:14.970863 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 13:22:14.970874 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 13:22:14.970884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 13:22:14.970896 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 13:22:14.970916 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 13:22:14.970927 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 13:22:14.970937 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 13:22:14.970948 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:22:14.970958 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 13:22:14.970968 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 13:22:14.970979 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 13:22:14.970992 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 13:22:14.971002 systemd[1]: Reached target machines.target - Containers. Dec 16 13:22:14.971013 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 13:22:14.971023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 13:22:14.971048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 13:22:14.971059 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 13:22:14.971069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 16 13:22:14.971080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 13:22:14.971090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 13:22:14.971103 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 13:22:14.971113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 13:22:14.971124 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 13:22:14.971135 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 13:22:14.971145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 13:22:14.971163 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 13:22:14.971173 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 13:22:14.971185 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 13:22:14.971198 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 13:22:14.971208 kernel: ACPI: bus type drm_connector registered Dec 16 13:22:14.971219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 13:22:14.971229 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 13:22:14.971240 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 13:22:14.971250 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 13:22:14.971260 kernel: fuse: init (API version 7.41) Dec 16 13:22:14.971271 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 16 13:22:14.971284 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 13:22:14.971309 kernel: loop: module loaded Dec 16 13:22:14.971320 systemd[1]: Stopped verity-setup.service. Dec 16 13:22:14.971330 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 13:22:14.971341 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 13:22:14.971351 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 13:22:14.971361 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 13:22:14.971406 systemd-journald[1187]: Collecting audit messages is disabled. Dec 16 13:22:14.971431 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 13:22:14.971442 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 13:22:14.971480 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 13:22:14.971490 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 13:22:14.971501 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 13:22:14.971513 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 13:22:14.971524 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 13:22:14.971534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 13:22:14.971545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 13:22:14.971556 systemd-journald[1187]: Journal started Dec 16 13:22:14.971576 systemd-journald[1187]: Runtime Journal (/run/log/journal/c46a5eeb59fc4c8db0e4803ec67d14eb) is 8M, max 78.2M, 70.2M free. Dec 16 13:22:14.404255 systemd[1]: Queued start job for default target multi-user.target. Dec 16 13:22:14.431917 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Dec 16 13:22:14.432714 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 13:22:14.975405 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 13:22:14.976392 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 13:22:14.976733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 13:22:14.977920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 13:22:14.978250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 13:22:14.979365 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 13:22:14.979633 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 13:22:14.980695 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 13:22:14.980892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 13:22:14.982023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 13:22:14.983163 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 13:22:14.984297 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 13:22:14.998006 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 13:22:15.003118 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 13:22:15.047601 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 13:22:15.052342 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 13:22:15.052380 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 13:22:15.055451 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 13:22:15.058991 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 13:22:15.059971 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 13:22:15.063192 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 13:22:15.064841 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 13:22:15.065641 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 13:22:15.066634 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 13:22:15.067429 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 13:22:15.084205 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 13:22:15.091530 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 13:22:15.095150 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 13:22:15.120124 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 13:22:15.122636 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 13:22:15.124539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 13:22:15.127530 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 13:22:15.166050 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 13:22:15.237300 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 13:22:15.288975 systemd-journald[1187]: Time spent on flushing to /var/log/journal/c46a5eeb59fc4c8db0e4803ec67d14eb is 59.202ms for 1012 entries. 
Dec 16 13:22:15.288975 systemd-journald[1187]: System Journal (/var/log/journal/c46a5eeb59fc4c8db0e4803ec67d14eb) is 8M, max 195.6M, 187.6M free. Dec 16 13:22:15.370662 systemd-journald[1187]: Received client request to flush runtime journal. Dec 16 13:22:15.370698 kernel: loop0: detected capacity change from 0 to 8 Dec 16 13:22:15.370735 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 13:22:15.370758 kernel: loop1: detected capacity change from 0 to 110984 Dec 16 13:22:15.284086 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 13:22:15.318173 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 13:22:15.349417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 13:22:15.372638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 13:22:15.380702 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 13:22:15.386809 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 13:22:15.392160 kernel: loop2: detected capacity change from 0 to 224512 Dec 16 13:22:15.435234 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 13:22:15.448115 kernel: loop3: detected capacity change from 0 to 128560 Dec 16 13:22:15.467459 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Dec 16 13:22:15.467902 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Dec 16 13:22:15.485725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 16 13:22:15.505824 kernel: loop4: detected capacity change from 0 to 8 Dec 16 13:22:15.529174 kernel: loop5: detected capacity change from 0 to 110984 Dec 16 13:22:15.586295 kernel: loop6: detected capacity change from 0 to 224512 Dec 16 13:22:15.651102 kernel: loop7: detected capacity change from 0 to 128560 Dec 16 13:22:15.971482 (sd-merge)[1251]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Dec 16 13:22:15.972695 (sd-merge)[1251]: Merged extensions into '/usr'. Dec 16 13:22:15.978866 systemd[1]: Reload requested from client PID 1226 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 13:22:15.978890 systemd[1]: Reloading... Dec 16 13:22:16.208083 zram_generator::config[1273]: No configuration found. Dec 16 13:22:16.825839 systemd[1]: Reloading finished in 846 ms. Dec 16 13:22:16.840481 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 13:22:16.874317 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 13:22:16.882696 ldconfig[1221]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 13:22:16.883385 systemd[1]: Starting ensure-sysext.service... Dec 16 13:22:16.887170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 13:22:16.891182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 13:22:16.912421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 13:22:16.935197 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)... Dec 16 13:22:16.935219 systemd[1]: Reloading... Dec 16 13:22:16.961674 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Dec 16 13:22:16.967571 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Dec 16 13:22:16.967814 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 13:22:16.968328 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 13:22:16.968779 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 13:22:16.970264 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 13:22:16.970676 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Dec 16 13:22:16.970774 systemd-tmpfiles[1321]: ACLs are not supported, ignoring.
Dec 16 13:22:16.977447 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:22:16.977462 systemd-tmpfiles[1321]: Skipping /boot
Dec 16 13:22:17.005004 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 13:22:17.006291 systemd-tmpfiles[1321]: Skipping /boot
Dec 16 13:22:17.085128 zram_generator::config[1352]: No configuration found.
Dec 16 13:22:17.408775 systemd[1]: Reloading finished in 473 ms.
Dec 16 13:22:17.433431 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 13:22:17.435697 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 13:22:17.467945 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 13:22:17.480351 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 13:22:17.485321 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 13:22:17.503571 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 13:22:17.507749 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 13:22:17.508525 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 13:22:17.515318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 13:22:17.521125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 13:22:17.527078 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 16 13:22:17.539302 kernel: ACPI: button: Power Button [PWRF]
Dec 16 13:22:17.541412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.541736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:22:17.552680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:22:17.556896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 13:22:17.562873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 13:22:17.564204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:22:17.564525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:22:17.565174 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.570554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.570730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:22:17.570894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:22:17.570968 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:22:17.578106 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 13:22:17.579141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.585566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:22:17.585838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:22:17.589859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.590426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 13:22:17.595351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 13:22:17.604305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 13:22:17.606295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 13:22:17.606439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 13:22:17.606588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 16 13:22:17.623127 systemd[1]: Finished ensure-sysext.service.
Dec 16 13:22:17.625595 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 13:22:17.641010 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 16 13:22:17.654197 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 13:22:17.659008 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 13:22:17.667609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 13:22:17.667840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 13:22:17.671477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 13:22:17.671710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 13:22:17.672629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 13:22:17.675471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 13:22:17.675708 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 13:22:17.684814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 13:22:17.694974 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 13:22:17.695796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 13:22:17.700263 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 13:22:17.702326 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 13:22:17.724575 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 13:22:17.751622 augenrules[1480]: No rules
Dec 16 13:22:17.753678 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 13:22:17.755089 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 13:22:17.756050 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 13:22:17.775063 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 16 13:22:17.778408 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 16 13:22:17.830800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 16 13:22:17.849301 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 13:22:17.923169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 13:22:17.926062 kernel: EDAC MC: Ver: 3.0.0
Dec 16 13:22:17.926538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 13:22:18.176368 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 16 13:22:18.207806 systemd-networkd[1433]: lo: Link UP
Dec 16 13:22:18.208192 systemd-networkd[1433]: lo: Gained carrier
Dec 16 13:22:18.211521 systemd-networkd[1433]: Enumeration completed
Dec 16 13:22:18.215255 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:22:18.217061 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 13:22:18.217973 systemd-networkd[1433]: eth0: Link UP
Dec 16 13:22:18.218253 systemd-networkd[1433]: eth0: Gained carrier
Dec 16 13:22:18.220139 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 13:22:18.235218 systemd-resolved[1434]: Positive Trust Anchors:
Dec 16 13:22:18.235238 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 13:22:18.235265 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 13:22:18.240537 systemd-resolved[1434]: Defaulting to hostname 'linux'.
Dec 16 13:22:18.259687 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 13:22:18.260610 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 13:22:18.261714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 13:22:18.263153 systemd[1]: Reached target network.target - Network.
Dec 16 13:22:18.263861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 13:22:18.264704 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 13:22:18.265571 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 13:22:18.266584 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 13:22:18.267377 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 16 13:22:18.268180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 13:22:18.269007 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 13:22:18.269065 systemd[1]: Reached target paths.target - Path Units.
Dec 16 13:22:18.269828 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 13:22:18.270814 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 13:22:18.271697 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 13:22:18.279332 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 13:22:18.281852 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 13:22:18.284863 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 13:22:18.288309 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 13:22:18.289284 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 13:22:18.290093 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 13:22:18.301938 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 13:22:18.303784 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 13:22:18.306687 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 13:22:18.309109 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 13:22:18.312794 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 13:22:18.314787 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 13:22:18.315667 systemd[1]: Reached target basic.target - Basic System.
Dec 16 13:22:18.316681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:22:18.316751 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 13:22:18.318160 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 13:22:18.323107 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 13:22:18.327259 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 13:22:18.332385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 13:22:18.336810 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 13:22:18.341580 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 13:22:18.342353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 13:22:18.346424 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 16 13:22:18.351165 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 13:22:18.358194 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 13:22:18.413332 jq[1523]: false
Dec 16 13:22:18.416172 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 13:22:18.507133 oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Dec 16 13:22:18.513996 extend-filesystems[1524]: Found /dev/sda6
Dec 16 13:22:18.509160 oslogin_cache_refresh[1525]: Failure getting users, quitting
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting
Dec 16 13:22:18.516227 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:22:18.509185 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 16 13:22:18.509316 oslogin_cache_refresh[1525]: Refreshing group entry cache
Dec 16 13:22:18.509777 oslogin_cache_refresh[1525]: Failure getting groups, quitting
Dec 16 13:22:18.509787 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 16 13:22:18.519548 extend-filesystems[1524]: Found /dev/sda9
Dec 16 13:22:18.527542 extend-filesystems[1524]: Checking size of /dev/sda9
Dec 16 13:22:18.528943 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 13:22:18.549600 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 13:22:18.551950 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 13:22:18.553010 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 13:22:18.584342 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 13:22:18.590526 extend-filesystems[1524]: Resized partition /dev/sda9
Dec 16 13:22:18.596156 extend-filesystems[1553]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 13:22:18.722407 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Dec 16 13:22:18.725246 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 13:22:18.747332 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 13:22:18.751579 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 13:22:18.752091 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 13:22:18.752568 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 16 13:22:18.753474 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 16 13:22:18.755476 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 13:22:18.755928 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 13:22:18.761338 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 13:22:18.782401 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 13:22:18.823134 jq[1551]: true
Dec 16 13:22:18.961007 coreos-metadata[1520]: Dec 16 13:22:18.889 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 16 13:22:18.851778 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 13:22:18.963665 update_engine[1546]: I20251216 13:22:18.840867 1546 main.cc:92] Flatcar Update Engine starting
Dec 16 13:22:18.933632 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 13:22:18.964353 jq[1568]: true
Dec 16 13:22:18.966157 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 16 13:22:18.969288 dbus-daemon[1521]: [system] SELinux support is enabled
Dec 16 13:22:18.966189 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 16 13:22:18.969571 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 13:22:18.991495 systemd-logind[1540]: New seat seat0.
Dec 16 13:22:19.020322 update_engine[1546]: I20251216 13:22:19.019968 1546 update_check_scheduler.cc:74] Next update check in 4m47s
Dec 16 13:22:19.019395 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 13:22:18.997570 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 13:22:19.014128 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 13:22:19.014166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 13:22:19.015011 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 13:22:19.015029 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 13:22:19.017113 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 13:22:19.061214 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 13:22:19.063571 tar[1556]: linux-amd64/LICENSE
Dec 16 13:22:19.064211 tar[1556]: linux-amd64/helm
Dec 16 13:22:19.251987 bash[1589]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:22:19.254778 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 13:22:19.260512 systemd[1]: Starting sshkeys.service...
Dec 16 13:22:19.451257 systemd-networkd[1433]: eth0: Gained IPv6LL
Dec 16 13:22:19.514236 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Dec 16 13:22:19.612379 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 13:22:19.714728 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 13:22:19.834069 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 13:22:20.135297 systemd-networkd[1433]: eth0: DHCPv4 address 172.236.100.113/24, gateway 172.236.100.1 acquired from 23.192.120.212
Dec 16 13:22:20.135922 dbus-daemon[1521]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 13:22:20.194336 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Dec 16 13:22:20.202488 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Dec 16 13:22:20.215074 coreos-metadata[1520]: Dec 16 13:22:20.202 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 16 13:22:20.215933 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 13:22:20.221206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 13:22:20.247309 coreos-metadata[1593]: Dec 16 13:22:20.232 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 16 13:22:20.312296 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 13:22:20.354613 coreos-metadata[1520]: Dec 16 13:22:20.331 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Dec 16 13:22:20.351278 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 13:22:20.351485 locksmithd[1574]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 13:22:20.352666 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 13:22:20.357825 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 13:22:20.381684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:22:20.414698 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 13:22:20.508890 systemd[1]: Started sshd@0-172.236.100.113:22-139.178.89.65:60612.service - OpenSSH per-connection server daemon (139.178.89.65:60612).
Dec 16 13:22:20.559022 coreos-metadata[1593]: Dec 16 13:22:20.555 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Dec 16 13:22:20.559203 coreos-metadata[1520]: Dec 16 13:22:20.555 INFO Fetch successful
Dec 16 13:22:20.559203 coreos-metadata[1520]: Dec 16 13:22:20.555 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Dec 16 13:22:20.618826 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 13:22:20.619129 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 13:22:20.630862 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 13:22:20.703296 coreos-metadata[1593]: Dec 16 13:22:20.703 INFO Fetch successful
Dec 16 13:22:20.734170 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 13:22:20.776587 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 13:22:20.810270 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 13:22:20.886611 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 16 13:22:20.906009 coreos-metadata[1520]: Dec 16 13:22:20.879 INFO Fetch successful
Dec 16 13:22:20.887868 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 13:22:20.928342 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 13:22:20.930781 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 13:22:20.931383 dbus-daemon[1521]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1604 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 13:22:20.964885 update-ssh-keys[1637]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 13:22:20.965553 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 13:22:20.968095 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 13:22:20.982176 systemd[1]: Finished sshkeys.service.
Dec 16 13:22:20.989527 containerd[1559]: time="2025-12-16T13:22:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 13:22:21.015019 containerd[1559]: time="2025-12-16T13:22:21.014943269Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.219597292Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.61µs"
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220295362Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220321272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220547092Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220572092Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220605442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220685512Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220697732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220966352Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.220981002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.221005042Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222281 containerd[1559]: time="2025-12-16T13:22:21.221012862Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221140933Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221435643Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221468223Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221476993Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221532673Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221856553Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 13:22:21.222642 containerd[1559]: time="2025-12-16T13:22:21.221931053Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 13:22:21.229102 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273127098Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273261469Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273357049Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273402579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273434379Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273466249Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273477619Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273495349Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273507999Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273517819Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273546139Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273558489Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273742599Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 13:22:21.274648 containerd[1559]: time="2025-12-16T13:22:21.273787529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273803339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273813199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273823249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273832449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273862629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273873789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273884249Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273893719Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273903379Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273981139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.273995739Z" level=info msg="Start snapshots syncer"
Dec 16 13:22:21.275000 containerd[1559]: time="2025-12-16T13:22:21.274061799Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 13:22:21.275290 containerd[1559]: time="2025-12-16T13:22:21.274537629Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 13:22:21.275290 containerd[1559]: time="2025-12-16T13:22:21.274618929Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309531697Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309727497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309754077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309766037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309776057Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309808647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309818777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309829667Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309864497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309874967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309884467Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309928337Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309946807Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 13:22:21.313642 containerd[1559]: time="2025-12-16T13:22:21.309956407Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.309965487Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.309973197Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.309981777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.310002267Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.310023837Z" level=info msg="runtime interface created" Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.310028987Z" level=info msg="created NRI interface" Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.310921427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.311089157Z" level=info msg="Connect containerd service" Dec 16 13:22:21.314108 containerd[1559]: time="2025-12-16T13:22:21.311136297Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 13:22:21.315477 
extend-filesystems[1553]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 13:22:21.315477 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 10 Dec 16 13:22:21.315477 extend-filesystems[1553]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Dec 16 13:22:21.324926 extend-filesystems[1524]: Resized filesystem in /dev/sda9 Dec 16 13:22:21.327203 containerd[1559]: time="2025-12-16T13:22:21.324430414Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 13:22:21.317730 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 13:22:21.318019 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 13:22:21.386157 sshd[1623]: Accepted publickey for core from 139.178.89.65 port 60612 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:22:21.389691 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:22:21.437673 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 13:22:21.440385 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 13:22:21.447929 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection. Dec 16 13:22:21.487629 systemd-logind[1540]: New session 1 of user core. Dec 16 13:22:21.537019 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 13:22:21.542625 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 16 13:22:21.615681 polkitd[1647]: Started polkitd version 126
Dec 16 13:22:21.803876 polkitd[1647]: Loading rules from directory /etc/polkit-1/rules.d
Dec 16 13:22:21.804441 polkitd[1647]: Loading rules from directory /run/polkit-1/rules.d
Dec 16 13:22:21.804542 polkitd[1647]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:22:21.804848 polkitd[1647]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 16 13:22:21.833498 systemd[1]: Started polkit.service - Authorization Manager.
Dec 16 13:22:21.804852 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 13:22:21.839188 systemd-logind[1540]: New session c1 of user core.
Dec 16 13:22:21.804880 polkitd[1647]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 16 13:22:21.804941 polkitd[1647]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 16 13:22:21.833106 polkitd[1647]: Finished loading, compiling and executing 2 rules
Dec 16 13:22:21.882860 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 16 13:22:21.883920 polkitd[1647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 16 13:22:21.997181 systemd-hostnamed[1604]: Hostname set to <172-236-100-113> (transient)
Dec 16 13:22:22.032891 systemd-resolved[1434]: System hostname changed to '172-236-100-113'.
Dec 16 13:22:22.248018 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 13:22:22.251896 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 13:22:22.273744 containerd[1559]: time="2025-12-16T13:22:22.273441938Z" level=info msg="Start subscribing containerd event"
Dec 16 13:22:22.273744 containerd[1559]: time="2025-12-16T13:22:22.273577748Z" level=info msg="Start recovering state"
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274740509Z" level=info msg="Start event monitor"
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274781079Z" level=info msg="Start cni network conf syncer for default"
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274793939Z" level=info msg="Start streaming server"
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274809649Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274824879Z" level=info msg="runtime interface starting up..."
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274837129Z" level=info msg="starting plugins..."
Dec 16 13:22:22.275048 containerd[1559]: time="2025-12-16T13:22:22.274863119Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 13:22:22.276194 containerd[1559]: time="2025-12-16T13:22:22.276171990Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 13:22:22.276421 containerd[1559]: time="2025-12-16T13:22:22.276406010Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 13:22:22.278065 containerd[1559]: time="2025-12-16T13:22:22.277811320Z" level=info msg="containerd successfully booted in 1.289474s"
Dec 16 13:22:22.277995 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 13:22:22.447832 systemd[1668]: Queued start job for default target default.target.
Dec 16 13:22:22.475013 systemd[1668]: Created slice app.slice - User Application Slice.
Dec 16 13:22:22.475087 systemd[1668]: Reached target paths.target - Paths.
Dec 16 13:22:22.475158 systemd[1668]: Reached target timers.target - Timers.
Dec 16 13:22:22.486389 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 13:22:22.540546 tar[1556]: linux-amd64/README.md
Dec 16 13:22:22.558993 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 13:22:22.560790 systemd[1668]: Reached target sockets.target - Sockets.
Dec 16 13:22:22.560890 systemd[1668]: Reached target basic.target - Basic System.
Dec 16 13:22:22.560946 systemd[1668]: Reached target default.target - Main User Target.
Dec 16 13:22:22.560998 systemd[1668]: Startup finished in 581ms.
Dec 16 13:22:22.561270 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 13:22:22.577215 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 13:22:22.582425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 13:22:23.056142 systemd[1]: Started sshd@1-172.236.100.113:22-139.178.89.65:33024.service - OpenSSH per-connection server daemon (139.178.89.65:33024).
Dec 16 13:22:23.635711 sshd[1706]: Accepted publickey for core from 139.178.89.65 port 33024 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:23.637527 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:23.645561 systemd-logind[1540]: New session 2 of user core.
Dec 16 13:22:23.659249 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 13:22:23.912326 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Dec 16 13:22:23.917462 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit.
Dec 16 13:22:23.924834 sshd[1709]: Connection closed by 139.178.89.65 port 33024
Dec 16 13:22:23.919986 systemd[1]: sshd@1-172.236.100.113:22-139.178.89.65:33024.service: Deactivated successfully.
Dec 16 13:22:23.922870 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 13:22:23.925667 systemd-logind[1540]: Removed session 2.
Dec 16 13:22:23.996858 systemd[1]: Started sshd@2-172.236.100.113:22-139.178.89.65:33034.service - OpenSSH per-connection server daemon (139.178.89.65:33034).
Dec 16 13:22:24.375400 sshd[1715]: Accepted publickey for core from 139.178.89.65 port 33034 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:24.377321 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:24.385956 systemd-logind[1540]: New session 3 of user core.
Dec 16 13:22:24.393272 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 13:22:24.648667 sshd[1718]: Connection closed by 139.178.89.65 port 33034
Dec 16 13:22:24.649869 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Dec 16 13:22:24.655970 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit.
Dec 16 13:22:24.656555 systemd[1]: sshd@2-172.236.100.113:22-139.178.89.65:33034.service: Deactivated successfully.
Dec 16 13:22:24.659419 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 13:22:24.663308 systemd-logind[1540]: Removed session 3.
Dec 16 13:22:24.829806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:22:24.831368 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 13:22:24.834355 systemd[1]: Startup finished in 3.019s (kernel) + 15.029s (initrd) + 11.209s (userspace) = 29.258s.
Dec 16 13:22:24.888481 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:22:26.392248 kubelet[1728]: E1216 13:22:26.392136 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:22:26.397645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:22:26.398170 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:22:26.399582 systemd[1]: kubelet.service: Consumed 3.560s CPU time, 265.5M memory peak.
Dec 16 13:22:34.720679 systemd[1]: Started sshd@3-172.236.100.113:22-139.178.89.65:39690.service - OpenSSH per-connection server daemon (139.178.89.65:39690).
Dec 16 13:22:35.083802 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 39690 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:35.085669 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:35.093210 systemd-logind[1540]: New session 4 of user core.
Dec 16 13:22:35.100268 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 13:22:35.338765 sshd[1744]: Connection closed by 139.178.89.65 port 39690
Dec 16 13:22:35.339245 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Dec 16 13:22:35.343580 systemd[1]: sshd@3-172.236.100.113:22-139.178.89.65:39690.service: Deactivated successfully.
Dec 16 13:22:35.345900 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 13:22:35.347164 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit.
Dec 16 13:22:35.348867 systemd-logind[1540]: Removed session 4.
Dec 16 13:22:35.400618 systemd[1]: Started sshd@4-172.236.100.113:22-139.178.89.65:39694.service - OpenSSH per-connection server daemon (139.178.89.65:39694).
Dec 16 13:22:35.750893 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 39694 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:35.752774 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:35.758808 systemd-logind[1540]: New session 5 of user core.
Dec 16 13:22:35.766234 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 13:22:35.999480 sshd[1753]: Connection closed by 139.178.89.65 port 39694
Dec 16 13:22:36.001134 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Dec 16 13:22:36.008110 systemd[1]: sshd@4-172.236.100.113:22-139.178.89.65:39694.service: Deactivated successfully.
Dec 16 13:22:36.010163 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 13:22:36.010993 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit.
Dec 16 13:22:36.012535 systemd-logind[1540]: Removed session 5.
Dec 16 13:22:36.069761 systemd[1]: Started sshd@5-172.236.100.113:22-139.178.89.65:39702.service - OpenSSH per-connection server daemon (139.178.89.65:39702).
Dec 16 13:22:36.443708 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 39702 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:36.445553 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:36.446817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 13:22:36.450221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:22:36.454235 systemd-logind[1540]: New session 6 of user core.
Dec 16 13:22:36.467878 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 13:22:36.707124 sshd[1765]: Connection closed by 139.178.89.65 port 39702
Dec 16 13:22:36.709482 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Dec 16 13:22:36.717725 systemd[1]: sshd@5-172.236.100.113:22-139.178.89.65:39702.service: Deactivated successfully.
Dec 16 13:22:36.719391 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit.
Dec 16 13:22:36.721623 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 13:22:36.728344 systemd-logind[1540]: Removed session 6.
Dec 16 13:22:36.775932 systemd[1]: Started sshd@6-172.236.100.113:22-139.178.89.65:39718.service - OpenSSH per-connection server daemon (139.178.89.65:39718).
Dec 16 13:22:36.810341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:22:36.824584 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:22:36.900445 kubelet[1779]: E1216 13:22:36.900353 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:22:36.906110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:22:36.906370 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:22:36.906813 systemd[1]: kubelet.service: Consumed 396ms CPU time, 108M memory peak.
Dec 16 13:22:37.138857 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 39718 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:22:37.140977 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:22:37.149186 systemd-logind[1540]: New session 7 of user core.
Dec 16 13:22:37.160437 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 13:22:37.357696 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 13:22:37.358293 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 13:22:39.464770 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 13:22:39.491890 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 13:22:40.755861 dockerd[1804]: time="2025-12-16T13:22:40.754673953Z" level=info msg="Starting up"
Dec 16 13:22:40.756698 dockerd[1804]: time="2025-12-16T13:22:40.756671244Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 13:22:40.832200 dockerd[1804]: time="2025-12-16T13:22:40.832134741Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 13:22:40.951090 dockerd[1804]: time="2025-12-16T13:22:40.950995071Z" level=info msg="Loading containers: start."
Dec 16 13:22:40.980112 kernel: Initializing XFRM netlink socket
Dec 16 13:22:41.290412 systemd-timesyncd[1453]: Network configuration changed, trying to establish connection.
Dec 16 13:22:41.350379 systemd-networkd[1433]: docker0: Link UP
Dec 16 13:22:41.354000 dockerd[1804]: time="2025-12-16T13:22:41.353957902Z" level=info msg="Loading containers: done."
Dec 16 13:22:41.389738 dockerd[1804]: time="2025-12-16T13:22:41.389692720Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 13:22:41.389949 dockerd[1804]: time="2025-12-16T13:22:41.389786020Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 13:22:41.389949 dockerd[1804]: time="2025-12-16T13:22:41.389881650Z" level=info msg="Initializing buildkit"
Dec 16 13:22:41.411554 dockerd[1804]: time="2025-12-16T13:22:41.411500401Z" level=info msg="Completed buildkit initialization"
Dec 16 13:22:41.418700 dockerd[1804]: time="2025-12-16T13:22:41.418666204Z" level=info msg="Daemon has completed initialization"
Dec 16 13:22:41.418905 dockerd[1804]: time="2025-12-16T13:22:41.418846364Z" level=info msg="API listen on /run/docker.sock"
Dec 16 13:22:41.422459 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 13:22:42.765093 containerd[1559]: time="2025-12-16T13:22:42.764954907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 16 13:22:45.071890 systemd-resolved[1434]: Clock change detected. Flushing caches.
Dec 16 13:22:45.072083 systemd-timesyncd[1453]: Contacted time server [2602:81b:9000::c10c]:123 (2.flatcar.pool.ntp.org).
Dec 16 13:22:45.072181 systemd-timesyncd[1453]: Initial clock synchronization to Tue 2025-12-16 13:22:45.069974 UTC.
Dec 16 13:22:45.423913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742141873.mount: Deactivated successfully.
Dec 16 13:22:48.000097 containerd[1559]: time="2025-12-16T13:22:47.999491815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:48.001807 containerd[1559]: time="2025-12-16T13:22:48.001412435Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183"
Dec 16 13:22:48.005730 containerd[1559]: time="2025-12-16T13:22:48.003829407Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:48.009538 containerd[1559]: time="2025-12-16T13:22:48.009483210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:48.011378 containerd[1559]: time="2025-12-16T13:22:48.011312910Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 3.513201645s"
Dec 16 13:22:48.011666 containerd[1559]: time="2025-12-16T13:22:48.011633901Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 16 13:22:48.016785 containerd[1559]: time="2025-12-16T13:22:48.016723003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 16 13:22:48.718377 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 13:22:48.722079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:22:49.185246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:22:49.211247 (kubelet)[2079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:22:49.550288 kubelet[2079]: E1216 13:22:49.550204 2079 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:22:49.554790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:22:49.555057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:22:49.556145 systemd[1]: kubelet.service: Consumed 720ms CPU time, 110.2M memory peak.
Dec 16 13:22:50.830868 containerd[1559]: time="2025-12-16T13:22:50.830761259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:50.832634 containerd[1559]: time="2025-12-16T13:22:50.832220160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010"
Dec 16 13:22:50.833443 containerd[1559]: time="2025-12-16T13:22:50.833404321Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:50.837081 containerd[1559]: time="2025-12-16T13:22:50.837036192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:50.838375 containerd[1559]: time="2025-12-16T13:22:50.838327003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.82145687s"
Dec 16 13:22:50.838375 containerd[1559]: time="2025-12-16T13:22:50.838373903Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 16 13:22:50.840233 containerd[1559]: time="2025-12-16T13:22:50.840193194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 16 13:22:54.144298 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 13:22:54.232139 containerd[1559]: time="2025-12-16T13:22:54.230125018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:54.236705 containerd[1559]: time="2025-12-16T13:22:54.236629781Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Dec 16 13:22:54.240048 containerd[1559]: time="2025-12-16T13:22:54.239613552Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:54.268148 containerd[1559]: time="2025-12-16T13:22:54.264523085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:54.268148 containerd[1559]: time="2025-12-16T13:22:54.266182756Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 3.425925792s"
Dec 16 13:22:54.268148 containerd[1559]: time="2025-12-16T13:22:54.266326356Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 16 13:22:54.274068 containerd[1559]: time="2025-12-16T13:22:54.274006770Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 16 13:22:57.573316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2396486212.mount: Deactivated successfully.
Dec 16 13:22:58.579840 containerd[1559]: time="2025-12-16T13:22:58.579727871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:58.581047 containerd[1559]: time="2025-12-16T13:22:58.581001122Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Dec 16 13:22:58.582048 containerd[1559]: time="2025-12-16T13:22:58.581532362Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:58.583402 containerd[1559]: time="2025-12-16T13:22:58.583355543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:22:58.584134 containerd[1559]: time="2025-12-16T13:22:58.584090933Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 4.309848293s"
Dec 16 13:22:58.584195 containerd[1559]: time="2025-12-16T13:22:58.584162833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 16 13:22:58.586310 containerd[1559]: time="2025-12-16T13:22:58.586284294Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 16 13:22:59.259259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1242986620.mount: Deactivated successfully.
Dec 16 13:22:59.720897 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 13:22:59.728264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:23:00.247705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:00.259800 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 13:23:00.672932 kubelet[2166]: E1216 13:23:00.672468 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 13:23:00.676641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 13:23:00.677264 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 13:23:00.677871 systemd[1]: kubelet.service: Consumed 792ms CPU time, 110.5M memory peak.
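The crash loop above bottoms out in a single missing file: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (kubeadm normally writes it during init/join, so restarts before that point fail exactly this way). A minimal sketch of that check; the helper name `diagnose_kubelet_config` is made up for illustration, while the path and error wording are taken from the log:

```python
import os

# Path kubelet tries to read at startup, per the log line above.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def diagnose_kubelet_config(path: str = KUBELET_CONFIG) -> str:
    """Mirror the startup check: report the error kubelet logs when its
    config file is absent, or "ok" when the file exists."""
    if os.path.exists(path):
        return "ok"
    return f"failed to load kubelet config file, path: {path}"

# Probing a path that does not exist reproduces the logged failure mode.
print(diagnose_kubelet_config("/nonexistent/config.yaml"))
```

Until the file appears, systemd's Restart= policy keeps relaunching the unit, which is why the restart counter climbs.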
Dec 16 13:23:01.282058 containerd[1559]: time="2025-12-16T13:23:01.281399581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:01.282773 containerd[1559]: time="2025-12-16T13:23:01.282340731Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Dec 16 13:23:01.284891 containerd[1559]: time="2025-12-16T13:23:01.284809203Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:01.289299 containerd[1559]: time="2025-12-16T13:23:01.289206375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:01.291043 containerd[1559]: time="2025-12-16T13:23:01.290606095Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.704258261s"
Dec 16 13:23:01.291043 containerd[1559]: time="2025-12-16T13:23:01.290756766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 16 13:23:01.294453 containerd[1559]: time="2025-12-16T13:23:01.294419117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 13:23:01.934224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2030541703.mount: Deactivated successfully.
Dec 16 13:23:01.939953 containerd[1559]: time="2025-12-16T13:23:01.939908650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:23:01.940943 containerd[1559]: time="2025-12-16T13:23:01.940898070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 16 13:23:01.941413 containerd[1559]: time="2025-12-16T13:23:01.941354991Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:23:01.945229 containerd[1559]: time="2025-12-16T13:23:01.945191293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 13:23:01.946260 containerd[1559]: time="2025-12-16T13:23:01.946231393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 651.650296ms"
Dec 16 13:23:01.946364 containerd[1559]: time="2025-12-16T13:23:01.946342293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 13:23:01.947534 containerd[1559]: time="2025-12-16T13:23:01.947510194Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 16 13:23:02.647675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784842382.mount: Deactivated successfully.
Dec 16 13:23:06.312100 update_engine[1546]: I20251216 13:23:06.310335 1546 update_attempter.cc:509] Updating boot flags...
Dec 16 13:23:06.853318 containerd[1559]: time="2025-12-16T13:23:06.853201985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:06.854634 containerd[1559]: time="2025-12-16T13:23:06.854600016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Dec 16 13:23:06.856071 containerd[1559]: time="2025-12-16T13:23:06.856008526Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:06.859486 containerd[1559]: time="2025-12-16T13:23:06.859442218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 13:23:06.860703 containerd[1559]: time="2025-12-16T13:23:06.860674709Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.913031845s"
Dec 16 13:23:06.860836 containerd[1559]: time="2025-12-16T13:23:06.860815619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Dec 16 13:23:08.846532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:08.846725 systemd[1]: kubelet.service: Consumed 792ms CPU time, 110.5M memory peak.
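The pull entries above report both the image size and the wall-clock duration of each pull, which allows a rough throughput estimate. A minimal sketch using figures copied from the log (kube-proxy and etcd); the helper name `throughput_mib_s` is hypothetical, and this is a back-of-the-envelope check, not how containerd itself accounts for transfer rates:

```python
def throughput_mib_s(size_bytes: int, seconds: float) -> float:
    """Effective pull rate: image size divided by logged pull duration."""
    return size_bytes / seconds / (1024 * 1024)

# Sizes ("size" field) and durations ("in ...s") taken from the log lines above.
pulls = {
    "kube-proxy:v1.32.10": (31160442, 4.309848293),
    "etcd:3.5.16-0": (57680541, 4.913031845),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {throughput_mib_s(size, secs):.1f} MiB/s")
```

Both pulls land in the same single-digit-to-low-double-digit MiB/s range, consistent with one registry link being shared across the sequential pulls.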
Dec 16 13:23:08.850724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:23:08.884427 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-7.scope)...
Dec 16 13:23:08.884454 systemd[1]: Reloading...
Dec 16 13:23:09.058135 zram_generator::config[2324]: No configuration found.
Dec 16 13:23:09.387716 systemd[1]: Reloading finished in 502 ms.
Dec 16 13:23:09.459829 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 13:23:09.460169 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 13:23:09.460669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:09.460797 systemd[1]: kubelet.service: Consumed 285ms CPU time, 98.3M memory peak.
Dec 16 13:23:09.462809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:23:09.679445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:09.688638 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:23:09.788517 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:23:09.790039 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:23:09.790039 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:23:09.790039 kubelet[2378]: I1216 13:23:09.789152 2378 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:23:10.101426 kubelet[2378]: I1216 13:23:10.101375 2378 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:23:10.101426 kubelet[2378]: I1216 13:23:10.101416 2378 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:23:10.105811 kubelet[2378]: I1216 13:23:10.105541 2378 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:23:10.134231 kubelet[2378]: E1216 13:23:10.134158 2378 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.100.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:10.135979 kubelet[2378]: I1216 13:23:10.135521 2378 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:23:10.145825 kubelet[2378]: I1216 13:23:10.145772 2378 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:23:10.153958 kubelet[2378]: I1216 13:23:10.153903 2378 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:23:10.155551 kubelet[2378]: I1216 13:23:10.155495 2378 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:23:10.155835 kubelet[2378]: I1216 13:23:10.155541 2378 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-100-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:23:10.156222 kubelet[2378]: I1216 13:23:10.155908 2378 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:23:10.156222 kubelet[2378]: I1216 13:23:10.155925 2378 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 13:23:10.156319 kubelet[2378]: I1216 13:23:10.156301 2378 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:23:10.161077 kubelet[2378]: I1216 13:23:10.160881 2378 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 13:23:10.162766 kubelet[2378]: I1216 13:23:10.162523 2378 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:23:10.162766 kubelet[2378]: I1216 13:23:10.162607 2378 kubelet.go:352] "Adding apiserver pod source"
Dec 16 13:23:10.162766 kubelet[2378]: I1216 13:23:10.162680 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:23:10.169072 kubelet[2378]: I1216 13:23:10.168532 2378 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:23:10.169624 kubelet[2378]: I1216 13:23:10.169243 2378 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 13:23:10.169624 kubelet[2378]: W1216 13:23:10.169418 2378 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 13:23:10.172556 kubelet[2378]: I1216 13:23:10.172530 2378 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:23:10.172801 kubelet[2378]: I1216 13:23:10.172615 2378 server.go:1287] "Started kubelet"
Dec 16 13:23:10.172928 kubelet[2378]: W1216 13:23:10.172847 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.100.113:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-100-113&limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:10.172959 kubelet[2378]: E1216 13:23:10.172922 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.100.113:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-100-113&limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:10.174582 kubelet[2378]: W1216 13:23:10.174553 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.100.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:10.174691 kubelet[2378]: E1216 13:23:10.174664 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.100.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:10.174939 kubelet[2378]: I1216 13:23:10.174896 2378 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:23:10.179494 kubelet[2378]: I1216 13:23:10.179377 2378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:23:10.180217 kubelet[2378]: I1216 13:23:10.180192 2378 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:23:10.181139 kubelet[2378]: I1216 13:23:10.181105 2378 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 13:23:10.183215 kubelet[2378]: E1216 13:23:10.181561 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.100.113:6443/api/v1/namespaces/default/events\": dial tcp 172.236.100.113:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-100-113.1881b4dcff27fa97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-100-113,UID:172-236-100-113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-100-113,},FirstTimestamp:2025-12-16 13:23:10.172560023 +0000 UTC m=+0.446655034,LastTimestamp:2025-12-16 13:23:10.172560023 +0000 UTC m=+0.446655034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-100-113,}"
Dec 16 13:23:10.184307 kubelet[2378]: I1216 13:23:10.184288 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:23:10.187552 kubelet[2378]: I1216 13:23:10.187502 2378 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:23:10.193758 kubelet[2378]: E1216 13:23:10.193549 2378 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-100-113\" not found"
Dec 16 13:23:10.193758 kubelet[2378]: I1216 13:23:10.193629 2378 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:23:10.195389 kubelet[2378]: I1216 13:23:10.195356 2378 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:23:10.195442 kubelet[2378]: I1216 13:23:10.195438 2378 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:23:10.196069 kubelet[2378]: W1216 13:23:10.195892 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.100.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:10.196069 kubelet[2378]: E1216 13:23:10.195975 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.100.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:10.196262 kubelet[2378]: E1216 13:23:10.196163 2378 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:23:10.238199 kubelet[2378]: E1216 13:23:10.223172 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.100.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-100-113?timeout=10s\": dial tcp 172.236.100.113:6443: connect: connection refused" interval="200ms"
Dec 16 13:23:10.238199 kubelet[2378]: I1216 13:23:10.223922 2378 factory.go:221] Registration of the containerd container factory successfully
Dec 16 13:23:10.238199 kubelet[2378]: I1216 13:23:10.223938 2378 factory.go:221] Registration of the systemd container factory successfully
Dec 16 13:23:10.238199 kubelet[2378]: I1216 13:23:10.224096 2378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:23:10.263254 kubelet[2378]: I1216 13:23:10.263179 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:23:10.265849 kubelet[2378]: I1216 13:23:10.265314 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:23:10.265849 kubelet[2378]: I1216 13:23:10.265430 2378 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 13:23:10.265849 kubelet[2378]: I1216 13:23:10.265483 2378 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:23:10.265849 kubelet[2378]: I1216 13:23:10.265496 2378 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 13:23:10.265849 kubelet[2378]: E1216 13:23:10.265571 2378 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:23:10.271807 kubelet[2378]: W1216 13:23:10.271745 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.100.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:10.272129 kubelet[2378]: E1216 13:23:10.271898 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.100.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:10.281227 kubelet[2378]: I1216 13:23:10.281195 2378 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:23:10.281564 kubelet[2378]: I1216 13:23:10.281535 2378 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:23:10.281653 kubelet[2378]: I1216 13:23:10.281629 2378 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:23:10.283620 kubelet[2378]: I1216 13:23:10.283589 2378 policy_none.go:49] "None policy: Start"
Dec 16 13:23:10.283664 kubelet[2378]: I1216 13:23:10.283651 2378 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:23:10.283707 kubelet[2378]: I1216 13:23:10.283698 2378 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:23:10.291494 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 13:23:10.294578 kubelet[2378]: E1216 13:23:10.294550 2378 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-100-113\" not found"
Dec 16 13:23:10.302753 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 13:23:10.307185 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 13:23:10.318967 kubelet[2378]: I1216 13:23:10.318483 2378 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 13:23:10.319278 kubelet[2378]: I1216 13:23:10.319209 2378 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:23:10.320505 kubelet[2378]: I1216 13:23:10.319451 2378 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:23:10.320505 kubelet[2378]: I1216 13:23:10.319968 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:23:10.322737 kubelet[2378]: E1216 13:23:10.322701 2378 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:23:10.322807 kubelet[2378]: E1216 13:23:10.322793 2378 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-100-113\" not found"
Dec 16 13:23:10.380577 systemd[1]: Created slice kubepods-burstable-podf692feef20858866ab99bf6f7f9ff30c.slice - libcontainer container kubepods-burstable-podf692feef20858866ab99bf6f7f9ff30c.slice.
Dec 16 13:23:10.396676 kubelet[2378]: E1216 13:23:10.396389 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:10.397705 systemd[1]: Created slice kubepods-burstable-podf508cc8aaafe8901dd5e2eb3bdccf33f.slice - libcontainer container kubepods-burstable-podf508cc8aaafe8901dd5e2eb3bdccf33f.slice.
Dec 16 13:23:10.400433 kubelet[2378]: E1216 13:23:10.400354 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:10.416786 systemd[1]: Created slice kubepods-burstable-pod3cc644a09590502aebf2d35a3ce80ac9.slice - libcontainer container kubepods-burstable-pod3cc644a09590502aebf2d35a3ce80ac9.slice.
Dec 16 13:23:10.419069 kubelet[2378]: E1216 13:23:10.419043 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:10.422046 kubelet[2378]: I1216 13:23:10.421684 2378 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:10.422285 kubelet[2378]: E1216 13:23:10.422241 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.100.113:6443/api/v1/nodes\": dial tcp 172.236.100.113:6443: connect: connection refused" node="172-236-100-113"
Dec 16 13:23:10.423564 kubelet[2378]: E1216 13:23:10.423531 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.100.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-100-113?timeout=10s\": dial tcp 172.236.100.113:6443: connect: connection refused" interval="400ms"
Dec 16 13:23:10.496272 kubelet[2378]: I1216 13:23:10.496183 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-k8s-certs\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:10.496272 kubelet[2378]: I1216 13:23:10.496252 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-kubeconfig\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:10.496511 kubelet[2378]: I1216 13:23:10.496297 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:10.496511 kubelet[2378]: I1216 13:23:10.496334 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cc644a09590502aebf2d35a3ce80ac9-kubeconfig\") pod \"kube-scheduler-172-236-100-113\" (UID: \"3cc644a09590502aebf2d35a3ce80ac9\") " pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:10.496511 kubelet[2378]: I1216 13:23:10.496362 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-flexvolume-dir\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:10.496511 kubelet[2378]: I1216 13:23:10.496385 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-ca-certs\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:10.496511 kubelet[2378]: I1216 13:23:10.496408 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-k8s-certs\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:10.496827 kubelet[2378]: I1216 13:23:10.496439 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:10.496827 kubelet[2378]: I1216 13:23:10.496464 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-ca-certs\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:10.629936 kubelet[2378]: I1216 13:23:10.627648 2378 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:10.630570 kubelet[2378]: E1216 13:23:10.630545 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.100.113:6443/api/v1/nodes\": dial tcp 172.236.100.113:6443: connect: connection refused" node="172-236-100-113"
Dec 16 13:23:10.698789 kubelet[2378]: E1216 13:23:10.698055 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:10.699811 containerd[1559]: time="2025-12-16T13:23:10.699660517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-100-113,Uid:f692feef20858866ab99bf6f7f9ff30c,Namespace:kube-system,Attempt:0,}"
Dec 16 13:23:10.703702 kubelet[2378]: E1216 13:23:10.703245 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:10.704167 containerd[1559]: time="2025-12-16T13:23:10.704091699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-100-113,Uid:f508cc8aaafe8901dd5e2eb3bdccf33f,Namespace:kube-system,Attempt:0,}"
Dec 16 13:23:10.720362 kubelet[2378]: E1216 13:23:10.720320 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:10.721067 containerd[1559]: time="2025-12-16T13:23:10.720878527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-100-113,Uid:3cc644a09590502aebf2d35a3ce80ac9,Namespace:kube-system,Attempt:0,}"
Dec 16 13:23:10.889177 kubelet[2378]: E1216 13:23:10.889108 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.100.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-100-113?timeout=10s\": dial tcp 172.236.100.113:6443: connect: connection refused" interval="800ms"
Dec 16 13:23:10.997551 containerd[1559]: time="2025-12-16T13:23:10.997479946Z" level=info msg="connecting to shim 3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962" address="unix:///run/containerd/s/c3e171c853e219ca651ae9eccb28ffa110313f5f2978b073778a2995753e025b" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:23:11.001044 containerd[1559]: time="2025-12-16T13:23:11.000987637Z" level=info msg="connecting to shim 74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db" address="unix:///run/containerd/s/5e8104b2dfc0ccfadf7a7db85bf46ac22abb5ef1ecb3528f19c3af9f1c42e84f" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:23:11.008325 containerd[1559]: time="2025-12-16T13:23:11.008269301Z" level=info msg="connecting to shim 9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb" address="unix:///run/containerd/s/ceba35dfa7ddc945d523f52cc5a4636979a32e9c903794efb9c97d9b75ffb42a" namespace=k8s.io protocol=ttrpc version=3
Dec 16 13:23:11.069381 kubelet[2378]: I1216 13:23:11.068987 2378 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:11.069381 kubelet[2378]: E1216 13:23:11.069348 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.100.113:6443/api/v1/nodes\": dial tcp 172.236.100.113:6443: connect: connection refused" node="172-236-100-113"
Dec 16 13:23:11.151927 kubelet[2378]: W1216 13:23:11.151369 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.100.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:11.151927 kubelet[2378]: E1216 13:23:11.151538 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.100.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:11.173743 systemd[1]: Started cri-containerd-74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db.scope - libcontainer container 74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db.
Dec 16 13:23:11.209415 systemd[1]: Started cri-containerd-9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb.scope - libcontainer container 9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb.
Dec 16 13:23:11.228309 systemd[1]: Started cri-containerd-3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962.scope - libcontainer container 3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962.
Dec 16 13:23:11.276480 kubelet[2378]: W1216 13:23:11.274974 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.100.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:11.276480 kubelet[2378]: E1216 13:23:11.275162 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.100.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:11.476307 kubelet[2378]: W1216 13:23:11.476167 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.100.113:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-100-113&limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:11.476944 kubelet[2378]: E1216 13:23:11.476885 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.100.113:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-100-113&limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:11.540519 containerd[1559]: time="2025-12-16T13:23:11.539560086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-100-113,Uid:f692feef20858866ab99bf6f7f9ff30c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962\""
Dec 16 13:23:11.540519 containerd[1559]: time="2025-12-16T13:23:11.539923817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-100-113,Uid:3cc644a09590502aebf2d35a3ce80ac9,Namespace:kube-system,Attempt:0,} returns sandbox id \"74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db\""
Dec 16 13:23:11.543151 kubelet[2378]: E1216 13:23:11.543010 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:11.543648 kubelet[2378]: E1216 13:23:11.543589 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:11.545905 containerd[1559]: time="2025-12-16T13:23:11.545778170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-100-113,Uid:f508cc8aaafe8901dd5e2eb3bdccf33f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb\""
Dec 16 13:23:11.549394 kubelet[2378]: E1216 13:23:11.549362 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:11.551970 containerd[1559]: time="2025-12-16T13:23:11.550939422Z" level=info msg="CreateContainer within sandbox \"74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 13:23:11.551970 containerd[1559]: time="2025-12-16T13:23:11.551007982Z" level=info msg="CreateContainer within sandbox \"3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 13:23:11.556311 containerd[1559]: time="2025-12-16T13:23:11.556271385Z" level=info msg="Container 051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:23:11.568948 containerd[1559]: time="2025-12-16T13:23:11.568914061Z" level=info msg="CreateContainer within sandbox \"9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 13:23:11.573566 containerd[1559]: time="2025-12-16T13:23:11.573449073Z" level=info msg="CreateContainer within sandbox \"74adf38b69cdc58143323970b8bdc0d5d23d345f00523fbb2a4d96c73f9df4db\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b\""
Dec 16 13:23:11.574785 containerd[1559]: time="2025-12-16T13:23:11.574754774Z" level=info msg="StartContainer for \"051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b\""
Dec 16 13:23:11.577537 containerd[1559]: time="2025-12-16T13:23:11.577506165Z" level=info msg="connecting to shim 051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b" address="unix:///run/containerd/s/5e8104b2dfc0ccfadf7a7db85bf46ac22abb5ef1ecb3528f19c3af9f1c42e84f" protocol=ttrpc version=3
Dec 16 13:23:11.579795 containerd[1559]: time="2025-12-16T13:23:11.579754477Z" level=info msg="Container a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:23:11.580773 containerd[1559]: time="2025-12-16T13:23:11.580752947Z" level=info msg="Container e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5: CDI devices from CRI Config.CDIDevices: []"
Dec 16 13:23:11.586446 containerd[1559]: time="2025-12-16T13:23:11.586416790Z" level=info msg="CreateContainer within sandbox \"3d8d7610f941def1c2aadcbfe9c89e7ef60edf63f3c4c5d943a5396f7c7f7962\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec\""
Dec 16 13:23:11.588875 containerd[1559]: time="2025-12-16T13:23:11.588855211Z" level=info msg="StartContainer for \"a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec\""
Dec 16 13:23:11.590071 containerd[1559]: time="2025-12-16T13:23:11.590049192Z" level=info msg="connecting to shim a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec" address="unix:///run/containerd/s/c3e171c853e219ca651ae9eccb28ffa110313f5f2978b073778a2995753e025b" protocol=ttrpc version=3
Dec 16 13:23:11.591802 containerd[1559]: time="2025-12-16T13:23:11.591141192Z" level=info msg="CreateContainer within sandbox \"9bfa96e7eba7c45417d8a45f3042a91df1d0e1d7f4ddc43551431b701fe52ceb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5\""
Dec 16 13:23:11.592262 containerd[1559]: time="2025-12-16T13:23:11.592244063Z" level=info msg="StartContainer for \"e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5\""
Dec 16 13:23:11.594475 containerd[1559]: time="2025-12-16T13:23:11.594432904Z" level=info msg="connecting to shim e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5" address="unix:///run/containerd/s/ceba35dfa7ddc945d523f52cc5a4636979a32e9c903794efb9c97d9b75ffb42a" protocol=ttrpc version=3
Dec 16 13:23:11.611277 systemd[1]: Started cri-containerd-051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b.scope - libcontainer container 051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b.
Dec 16 13:23:11.633213 systemd[1]: Started cri-containerd-e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5.scope - libcontainer container e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5.
Dec 16 13:23:11.641276 systemd[1]: Started cri-containerd-a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec.scope - libcontainer container a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec.
Dec 16 13:23:11.757071 kubelet[2378]: E1216 13:23:11.753901 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.100.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-100-113?timeout=10s\": dial tcp 172.236.100.113:6443: connect: connection refused" interval="1.6s"
Dec 16 13:23:11.757387 kubelet[2378]: W1216 13:23:11.754142 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.100.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.100.113:6443: connect: connection refused
Dec 16 13:23:11.758116 kubelet[2378]: E1216 13:23:11.758075 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.100.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.100.113:6443: connect: connection refused" logger="UnhandledError"
Dec 16 13:23:11.875493 kubelet[2378]: I1216 13:23:11.874540 2378 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:11.875493 kubelet[2378]: E1216 13:23:11.874871 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.100.113:6443/api/v1/nodes\": dial tcp 172.236.100.113:6443: connect: connection refused" node="172-236-100-113"
Dec 16 13:23:11.931303 containerd[1559]: time="2025-12-16T13:23:11.931224512Z" level=info msg="StartContainer for \"a60adb220a7945f579824e9c7fd083c6abddbde74a685d09526c06259bdefcec\" returns successfully"
Dec 16 13:23:11.932166 containerd[1559]: time="2025-12-16T13:23:11.932142183Z" level=info msg="StartContainer for \"051c90019735eda3b193e1c7277975e7c3a4684d1e11b0d000bcc059b3910c7b\" returns successfully"
Dec 16 13:23:11.934156 containerd[1559]: time="2025-12-16T13:23:11.934133644Z" level=info msg="StartContainer for \"e208acebd404050ada1b2c771a22995a57d9a78bbee2a5e0f1701ae91d114cc5\" returns successfully"
Dec 16 13:23:12.293839 kubelet[2378]: E1216 13:23:12.293382 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:12.297127 kubelet[2378]: E1216 13:23:12.294631 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:12.303618 kubelet[2378]: E1216 13:23:12.303593 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:12.304484 kubelet[2378]: E1216 13:23:12.304413 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:12.305628 kubelet[2378]: E1216 13:23:12.305481 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:12.305628 kubelet[2378]: E1216 13:23:12.305568 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:13.329943 kubelet[2378]: E1216 13:23:13.329870 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:13.331165 kubelet[2378]: E1216 13:23:13.330949 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:13.331534 kubelet[2378]: E1216 13:23:13.331265 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:13.331534 kubelet[2378]: E1216 13:23:13.331494 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:13.480344 kubelet[2378]: I1216 13:23:13.480147 2378 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:14.326661 kubelet[2378]: E1216 13:23:14.326623 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:14.327259 kubelet[2378]: E1216 13:23:14.327207 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:15.449662 kubelet[2378]: E1216 13:23:15.449489 2378 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-100-113\" not found" node="172-236-100-113"
Dec 16 13:23:15.611301 kubelet[2378]: I1216 13:23:15.610053 2378 kubelet_node_status.go:78] "Successfully registered node" node="172-236-100-113"
Dec 16 13:23:15.702918 kubelet[2378]: I1216 13:23:15.701788 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:15.710038 kubelet[2378]: E1216 13:23:15.709987 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-100-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:15.710486 kubelet[2378]: I1216 13:23:15.710274 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:15.712246 kubelet[2378]: E1216 13:23:15.712010 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-100-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:15.712246 kubelet[2378]: I1216 13:23:15.712146 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:15.714526 kubelet[2378]: E1216 13:23:15.714468 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-100-113\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:16.287733 kubelet[2378]: I1216 13:23:16.287654 2378 apiserver.go:52] "Watching apiserver"
Dec 16 13:23:16.296481 kubelet[2378]: I1216 13:23:16.296438 2378 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:23:17.690557 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-7.scope)...
Dec 16 13:23:17.690590 systemd[1]: Reloading...
Dec 16 13:23:17.912160 zram_generator::config[2685]: No configuration found.
Dec 16 13:23:18.089960 kubelet[2378]: I1216 13:23:18.089907 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:18.100293 kubelet[2378]: E1216 13:23:18.100242 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:18.209294 systemd[1]: Reloading finished in 518 ms.
Dec 16 13:23:18.248440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:23:18.266833 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 13:23:18.267361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:18.267423 systemd[1]: kubelet.service: Consumed 1.191s CPU time, 133.7M memory peak.
Dec 16 13:23:18.272068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 13:23:18.664788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 13:23:18.692510 (kubelet)[2737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 13:23:18.892606 kubelet[2737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:23:18.892606 kubelet[2737]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 13:23:18.892606 kubelet[2737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 13:23:18.894047 kubelet[2737]: I1216 13:23:18.893225 2737 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 13:23:18.900365 kubelet[2737]: I1216 13:23:18.900341 2737 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 13:23:18.900475 kubelet[2737]: I1216 13:23:18.900465 2737 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 13:23:18.900742 kubelet[2737]: I1216 13:23:18.900729 2737 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 13:23:18.902124 kubelet[2737]: I1216 13:23:18.902108 2737 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 16 13:23:18.907343 kubelet[2737]: I1216 13:23:18.907320 2737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 13:23:18.912889 kubelet[2737]: I1216 13:23:18.912868 2737 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 13:23:18.917917 kubelet[2737]: I1216 13:23:18.917832 2737 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 13:23:18.918382 kubelet[2737]: I1216 13:23:18.918349 2737 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 13:23:18.918830 kubelet[2737]: I1216 13:23:18.918450 2737 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-100-113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 13:23:18.919138 kubelet[2737]: I1216 13:23:18.919125 2737 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 13:23:18.919185 kubelet[2737]: I1216 13:23:18.919177 2737 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 13:23:18.919358 kubelet[2737]: I1216 13:23:18.919346 2737 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:23:18.919622 kubelet[2737]: I1216 13:23:18.919611 2737 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 13:23:18.920474 kubelet[2737]: I1216 13:23:18.920453 2737 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 13:23:18.920525 kubelet[2737]: I1216 13:23:18.920499 2737 kubelet.go:352] "Adding apiserver pod source"
Dec 16 13:23:18.920525 kubelet[2737]: I1216 13:23:18.920511 2737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 13:23:18.923829 kubelet[2737]: I1216 13:23:18.922089 2737 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 13:23:18.923829 kubelet[2737]: I1216 13:23:18.922746 2737 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 13:23:18.923829 kubelet[2737]: I1216 13:23:18.923503 2737 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 13:23:18.923829 kubelet[2737]: I1216 13:23:18.923550 2737 server.go:1287] "Started kubelet"
Dec 16 13:23:18.926370 kubelet[2737]: I1216 13:23:18.926350 2737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 13:23:18.939107 kubelet[2737]: I1216 13:23:18.939055 2737 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 13:23:18.942510 kubelet[2737]: I1216 13:23:18.942481 2737 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 13:23:18.942929 kubelet[2737]: I1216 13:23:18.942864 2737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 13:23:18.943611 kubelet[2737]: I1216 13:23:18.943588 2737 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 13:23:18.944497 kubelet[2737]: I1216 13:23:18.944481 2737 server.go:479] "Adding debug handlers to kubelet server"
Dec 16 13:23:18.946309 kubelet[2737]: E1216 13:23:18.946291 2737 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-100-113\" not found"
Dec 16 13:23:18.951137 kubelet[2737]: I1216 13:23:18.950638 2737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 13:23:18.957105 kubelet[2737]: I1216 13:23:18.954620 2737 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 13:23:18.958068 kubelet[2737]: I1216 13:23:18.954827 2737 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 13:23:18.958962 kubelet[2737]: I1216 13:23:18.958940 2737 factory.go:221] Registration of the systemd container factory successfully
Dec 16 13:23:18.960229 kubelet[2737]: I1216 13:23:18.960187 2737 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 13:23:18.963900 kubelet[2737]: I1216 13:23:18.963862 2737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 16 13:23:18.966698 kubelet[2737]: E1216 13:23:18.965680 2737 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 13:23:18.968117 kubelet[2737]: I1216 13:23:18.968097 2737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 16 13:23:18.968246 kubelet[2737]: I1216 13:23:18.968235 2737 status_manager.go:227] "Starting to sync pod status with apiserver"
Dec 16 13:23:18.969096 kubelet[2737]: I1216 13:23:18.969075 2737 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 13:23:18.969199 kubelet[2737]: I1216 13:23:18.969188 2737 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 16 13:23:18.969316 kubelet[2737]: E1216 13:23:18.969298 2737 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 13:23:18.970859 kubelet[2737]: I1216 13:23:18.970702 2737 factory.go:221] Registration of the containerd container factory successfully
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.053888 2737 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.053906 2737 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.053932 2737 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054154 2737 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054166 2737 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054195 2737 policy_none.go:49] "None policy: Start"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054220 2737 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054244 2737 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 13:23:19.054998 kubelet[2737]: I1216 13:23:19.054361 2737 state_mem.go:75] "Updated machine memory state"
Dec 16 13:23:19.063305 kubelet[2737]: I1216 13:23:19.062831 2737 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 16 13:23:19.063305 kubelet[2737]: I1216 13:23:19.063063 2737 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 13:23:19.063305 kubelet[2737]: I1216 13:23:19.063075 2737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 13:23:19.063469 kubelet[2737]: I1216 13:23:19.063440 2737 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 13:23:19.068266 kubelet[2737]: E1216 13:23:19.068221 2737 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 13:23:19.071988 kubelet[2737]: I1216 13:23:19.070925 2737 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:19.071988 kubelet[2737]: I1216 13:23:19.071504 2737 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:19.072296 kubelet[2737]: I1216 13:23:19.072275 2737 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.091937 kubelet[2737]: E1216 13:23:19.091886 2737 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-100-113\" already exists" pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:19.159965 kubelet[2737]: I1216 13:23:19.159351 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:19.159965 kubelet[2737]: I1216 13:23:19.159392 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-flexvolume-dir\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.159965 kubelet[2737]: I1216 13:23:19.159412 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-k8s-certs\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.159965 kubelet[2737]: I1216 13:23:19.159426 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-k8s-certs\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:19.159965 kubelet[2737]: I1216 13:23:19.159440 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-ca-certs\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.160317 kubelet[2737]: I1216 13:23:19.159457 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-kubeconfig\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.160317 kubelet[2737]: I1216 13:23:19.159470 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f508cc8aaafe8901dd5e2eb3bdccf33f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-100-113\" (UID: \"f508cc8aaafe8901dd5e2eb3bdccf33f\") " pod="kube-system/kube-controller-manager-172-236-100-113"
Dec 16 13:23:19.160317 kubelet[2737]: I1216 13:23:19.159484 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cc644a09590502aebf2d35a3ce80ac9-kubeconfig\") pod \"kube-scheduler-172-236-100-113\" (UID: \"3cc644a09590502aebf2d35a3ce80ac9\") " pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:19.160317 kubelet[2737]: I1216 13:23:19.159499 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f692feef20858866ab99bf6f7f9ff30c-ca-certs\") pod \"kube-apiserver-172-236-100-113\" (UID: \"f692feef20858866ab99bf6f7f9ff30c\") " pod="kube-system/kube-apiserver-172-236-100-113"
Dec 16 13:23:19.196904 kubelet[2737]: I1216 13:23:19.196540 2737 kubelet_node_status.go:75] "Attempting to register node" node="172-236-100-113"
Dec 16 13:23:19.227770 kubelet[2737]: I1216 13:23:19.212418 2737 kubelet_node_status.go:124] "Node was previously registered" node="172-236-100-113"
Dec 16 13:23:19.227770 kubelet[2737]: I1216 13:23:19.212545 2737 kubelet_node_status.go:78] "Successfully registered node" node="172-236-100-113"
Dec 16 13:23:19.393788 kubelet[2737]: E1216 13:23:19.393215 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:19.393969 kubelet[2737]: E1216 13:23:19.393943 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:19.394088 kubelet[2737]: E1216 13:23:19.394074 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:19.922432 kubelet[2737]: I1216 13:23:19.921905 2737 apiserver.go:52] "Watching apiserver"
Dec 16 13:23:19.963623 kubelet[2737]: I1216 13:23:19.963567 2737 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 13:23:20.018210 kubelet[2737]: I1216 13:23:20.018173 2737 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:20.018995 kubelet[2737]: E1216 13:23:20.018963 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:20.019771 kubelet[2737]: E1216 13:23:20.019743 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:20.029670 kubelet[2737]: E1216 13:23:20.029631 2737 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-100-113\" already exists" pod="kube-system/kube-scheduler-172-236-100-113"
Dec 16 13:23:20.029826 kubelet[2737]: E1216 13:23:20.029801 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:23:20.114113 kubelet[2737]: I1216 13:23:20.096117 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-100-113" podStartSLOduration=2.096033185 podStartE2EDuration="2.096033185s" podCreationTimestamp="2025-12-16 13:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:20.095538009 +0000 UTC m=+1.391222790" watchObservedRunningTime="2025-12-16 13:23:20.096033185 +0000 UTC m=+1.391717966"
Dec 16 13:23:20.114113 kubelet[2737]: I1216 13:23:20.096269 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-100-113" podStartSLOduration=1.096260132 podStartE2EDuration="1.096260132s" podCreationTimestamp="2025-12-16 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:20.077208912 +0000 UTC m=+1.372893703" watchObservedRunningTime="2025-12-16 13:23:20.096260132 +0000 UTC m=+1.391944933"
Dec 16 13:23:20.893768 sudo[1787]: pam_unix(sudo:session): session closed for user root
Dec 16 13:23:20.951061 sshd[1786]: Connection closed by 139.178.89.65 port 39718
Dec 16 13:23:20.953118 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Dec 16 13:23:20.960491 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit.
Dec 16 13:23:20.961896 systemd[1]: sshd@6-172.236.100.113:22-139.178.89.65:39718.service: Deactivated successfully.
Dec 16 13:23:20.965815 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 13:23:20.966195 systemd[1]: session-7.scope: Consumed 6.605s CPU time, 233.3M memory peak.
Dec 16 13:23:20.968654 systemd-logind[1540]: Removed session 7.
Dec 16 13:23:21.021053 kubelet[2737]: E1216 13:23:21.020395 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:21.021053 kubelet[2737]: E1216 13:23:21.020520 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:22.033223 kubelet[2737]: E1216 13:23:22.033171 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:22.053656 kubelet[2737]: I1216 13:23:22.053543 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-100-113" podStartSLOduration=3.053491832 podStartE2EDuration="3.053491832s" podCreationTimestamp="2025-12-16 13:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:20.111889875 +0000 UTC m=+1.407574656" watchObservedRunningTime="2025-12-16 13:23:22.053491832 +0000 UTC m=+3.349176623" Dec 16 13:23:22.864178 kubelet[2737]: I1216 13:23:22.864103 2737 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 13:23:22.867210 containerd[1559]: time="2025-12-16T13:23:22.866954320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 13:23:22.867767 kubelet[2737]: I1216 13:23:22.867724 2737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 13:23:23.030347 kubelet[2737]: E1216 13:23:23.030285 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:23.483825 kubelet[2737]: W1216 13:23:23.483503 2737 reflector.go:569] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:172-236-100-113" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node '172-236-100-113' and this object Dec 16 13:23:23.483825 kubelet[2737]: E1216 13:23:23.483647 2737 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-flannel-cfg\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-flannel-cfg\" is forbidden: User \"system:node:172-236-100-113\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-flannel\": no relationship found between node '172-236-100-113' and this object" logger="UnhandledError" Dec 16 13:23:23.483825 kubelet[2737]: W1216 13:23:23.483737 2737 reflector.go:569] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172-236-100-113" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node '172-236-100-113' and this object Dec 16 13:23:23.483825 kubelet[2737]: E1216 13:23:23.483752 2737 reflector.go:166] "Unhandled Error" err="object-\"kube-flannel\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172-236-100-113\" cannot list resource \"configmaps\" in API 
group \"\" in the namespace \"kube-flannel\": no relationship found between node '172-236-100-113' and this object" logger="UnhandledError" Dec 16 13:23:23.486930 systemd[1]: Created slice kubepods-besteffort-pod8d107566_b7b2_4643_9882_3901ca354e15.slice - libcontainer container kubepods-besteffort-pod8d107566_b7b2_4643_9882_3901ca354e15.slice. Dec 16 13:23:23.499331 systemd[1]: Created slice kubepods-burstable-podf2e7602b_7a0d_45cf_972b_3637e628b790.slice - libcontainer container kubepods-burstable-podf2e7602b_7a0d_45cf_972b_3637e628b790.slice. Dec 16 13:23:23.572398 kubelet[2737]: I1216 13:23:23.572306 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d107566-b7b2-4643-9882-3901ca354e15-kube-proxy\") pod \"kube-proxy-pwnwv\" (UID: \"8d107566-b7b2-4643-9882-3901ca354e15\") " pod="kube-system/kube-proxy-pwnwv" Dec 16 13:23:23.572398 kubelet[2737]: I1216 13:23:23.572364 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2tt\" (UniqueName: \"kubernetes.io/projected/f2e7602b-7a0d-45cf-972b-3637e628b790-kube-api-access-bw2tt\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.572589 kubelet[2737]: I1216 13:23:23.572412 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2e7602b-7a0d-45cf-972b-3637e628b790-xtables-lock\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.572589 kubelet[2737]: I1216 13:23:23.572439 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d107566-b7b2-4643-9882-3901ca354e15-lib-modules\") pod 
\"kube-proxy-pwnwv\" (UID: \"8d107566-b7b2-4643-9882-3901ca354e15\") " pod="kube-system/kube-proxy-pwnwv" Dec 16 13:23:23.572589 kubelet[2737]: I1216 13:23:23.572460 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/f2e7602b-7a0d-45cf-972b-3637e628b790-flannel-cfg\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.572589 kubelet[2737]: I1216 13:23:23.572503 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d107566-b7b2-4643-9882-3901ca354e15-xtables-lock\") pod \"kube-proxy-pwnwv\" (UID: \"8d107566-b7b2-4643-9882-3901ca354e15\") " pod="kube-system/kube-proxy-pwnwv" Dec 16 13:23:23.572589 kubelet[2737]: I1216 13:23:23.572537 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmp2\" (UniqueName: \"kubernetes.io/projected/8d107566-b7b2-4643-9882-3901ca354e15-kube-api-access-hvmp2\") pod \"kube-proxy-pwnwv\" (UID: \"8d107566-b7b2-4643-9882-3901ca354e15\") " pod="kube-system/kube-proxy-pwnwv" Dec 16 13:23:23.572798 kubelet[2737]: I1216 13:23:23.572556 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f2e7602b-7a0d-45cf-972b-3637e628b790-run\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.572798 kubelet[2737]: I1216 13:23:23.572593 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/f2e7602b-7a0d-45cf-972b-3637e628b790-cni-plugin\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " 
pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.572798 kubelet[2737]: I1216 13:23:23.572610 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/f2e7602b-7a0d-45cf-972b-3637e628b790-cni\") pod \"kube-flannel-ds-xtr9r\" (UID: \"f2e7602b-7a0d-45cf-972b-3637e628b790\") " pod="kube-flannel/kube-flannel-ds-xtr9r" Dec 16 13:23:23.811178 kubelet[2737]: E1216 13:23:23.810976 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:23.814561 containerd[1559]: time="2025-12-16T13:23:23.814514711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwnwv,Uid:8d107566-b7b2-4643-9882-3901ca354e15,Namespace:kube-system,Attempt:0,}" Dec 16 13:23:23.882087 containerd[1559]: time="2025-12-16T13:23:23.881592064Z" level=info msg="connecting to shim f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9" address="unix:///run/containerd/s/52406759ae76529377e6bd42e9279316b445d5fb9e1ad2b5125a035835689bd1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:23:23.996526 systemd[1]: Started cri-containerd-f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9.scope - libcontainer container f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9. 
Dec 16 13:23:24.110494 containerd[1559]: time="2025-12-16T13:23:24.110379480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pwnwv,Uid:8d107566-b7b2-4643-9882-3901ca354e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9\"" Dec 16 13:23:24.113395 kubelet[2737]: E1216 13:23:24.112142 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:24.116260 containerd[1559]: time="2025-12-16T13:23:24.116206620Z" level=info msg="CreateContainer within sandbox \"f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 13:23:24.133523 containerd[1559]: time="2025-12-16T13:23:24.133485044Z" level=info msg="Container 79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:24.138175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575726658.mount: Deactivated successfully. 
Dec 16 13:23:24.142405 containerd[1559]: time="2025-12-16T13:23:24.142361697Z" level=info msg="CreateContainer within sandbox \"f2a651a1f7ca06ccdaea9af72198ca5ceee164c9c81291c3d356ffa38ad6b3d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5\"" Dec 16 13:23:24.143550 containerd[1559]: time="2025-12-16T13:23:24.143468634Z" level=info msg="StartContainer for \"79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5\"" Dec 16 13:23:24.146966 containerd[1559]: time="2025-12-16T13:23:24.146903707Z" level=info msg="connecting to shim 79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5" address="unix:///run/containerd/s/52406759ae76529377e6bd42e9279316b445d5fb9e1ad2b5125a035835689bd1" protocol=ttrpc version=3 Dec 16 13:23:24.172182 systemd[1]: Started cri-containerd-79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5.scope - libcontainer container 79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5. 
Dec 16 13:23:24.259143 containerd[1559]: time="2025-12-16T13:23:24.259081668Z" level=info msg="StartContainer for \"79f0b29ebdca5c79a50621eabd5bbfd271113a727eae06e718316d40db4f9ae5\" returns successfully" Dec 16 13:23:24.712582 kubelet[2737]: E1216 13:23:24.712527 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:24.714089 containerd[1559]: time="2025-12-16T13:23:24.714011262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-xtr9r,Uid:f2e7602b-7a0d-45cf-972b-3637e628b790,Namespace:kube-flannel,Attempt:0,}" Dec 16 13:23:24.746004 containerd[1559]: time="2025-12-16T13:23:24.745931997Z" level=info msg="connecting to shim 1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7" address="unix:///run/containerd/s/5cf469367f430f80758b613d24987e1a3dede9b1bb4bc51cf3463a383aa8f6a0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:23:24.847237 systemd[1]: Started cri-containerd-1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7.scope - libcontainer container 1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7. 
Dec 16 13:23:24.979082 containerd[1559]: time="2025-12-16T13:23:24.978991448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-xtr9r,Uid:f2e7602b-7a0d-45cf-972b-3637e628b790,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\"" Dec 16 13:23:24.988270 kubelet[2737]: E1216 13:23:24.988216 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:24.993491 containerd[1559]: time="2025-12-16T13:23:24.993429885Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 16 13:23:25.006166 systemd-resolved[1434]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 172.232.0.13. Dec 16 13:23:25.052344 kubelet[2737]: E1216 13:23:25.051242 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:25.080946 kubelet[2737]: I1216 13:23:25.080778 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pwnwv" podStartSLOduration=2.080749572 podStartE2EDuration="2.080749572s" podCreationTimestamp="2025-12-16 13:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:25.078662675 +0000 UTC m=+6.374347466" watchObservedRunningTime="2025-12-16 13:23:25.080749572 +0000 UTC m=+6.376434353" Dec 16 13:23:25.272698 kubelet[2737]: E1216 13:23:25.272232 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:25.763879 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3135759376.mount: Deactivated successfully. Dec 16 13:23:25.821921 containerd[1559]: time="2025-12-16T13:23:25.821829384Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:25.823168 containerd[1559]: time="2025-12-16T13:23:25.822724714Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Dec 16 13:23:25.823965 containerd[1559]: time="2025-12-16T13:23:25.823940302Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:25.827156 containerd[1559]: time="2025-12-16T13:23:25.827131043Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:25.828848 containerd[1559]: time="2025-12-16T13:23:25.828793151Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 835.292933ms" Dec 16 13:23:25.828848 containerd[1559]: time="2025-12-16T13:23:25.828836072Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 16 13:23:25.833775 containerd[1559]: time="2025-12-16T13:23:25.833717211Z" level=info msg="CreateContainer within sandbox \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for container 
&ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 16 13:23:25.840606 containerd[1559]: time="2025-12-16T13:23:25.840573156Z" level=info msg="Container c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:25.849381 containerd[1559]: time="2025-12-16T13:23:25.849336363Z" level=info msg="CreateContainer within sandbox \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a\"" Dec 16 13:23:25.850067 containerd[1559]: time="2025-12-16T13:23:25.850048980Z" level=info msg="StartContainer for \"c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a\"" Dec 16 13:23:25.851285 containerd[1559]: time="2025-12-16T13:23:25.851234936Z" level=info msg="connecting to shim c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a" address="unix:///run/containerd/s/5cf469367f430f80758b613d24987e1a3dede9b1bb4bc51cf3463a383aa8f6a0" protocol=ttrpc version=3 Dec 16 13:23:25.941348 systemd[1]: Started cri-containerd-c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a.scope - libcontainer container c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a. Dec 16 13:23:26.003047 containerd[1559]: time="2025-12-16T13:23:26.002963641Z" level=info msg="StartContainer for \"c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a\" returns successfully" Dec 16 13:23:26.006644 systemd[1]: cri-containerd-c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a.scope: Deactivated successfully. 
Dec 16 13:23:26.010721 containerd[1559]: time="2025-12-16T13:23:26.010625683Z" level=info msg="received container exit event container_id:\"c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a\" id:\"c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a\" pid:3075 exited_at:{seconds:1765891406 nanos:9620001}" Dec 16 13:23:26.058692 kubelet[2737]: E1216 13:23:26.058451 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:26.059153 kubelet[2737]: E1216 13:23:26.058720 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:26.067277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8006e8a29ec7b6de36fcc68ee8e8f0ee3d4c07262fdfd7d45582d865070b73a-rootfs.mount: Deactivated successfully. Dec 16 13:23:27.064058 kubelet[2737]: E1216 13:23:27.063595 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:27.064058 kubelet[2737]: E1216 13:23:27.063710 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:27.067048 containerd[1559]: time="2025-12-16T13:23:27.066225751Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 16 13:23:28.151879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065517534.mount: Deactivated successfully. 
Dec 16 13:23:29.098371 kubelet[2737]: E1216 13:23:29.098181 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:29.739061 containerd[1559]: time="2025-12-16T13:23:29.738443402Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:29.739997 containerd[1559]: time="2025-12-16T13:23:29.739922678Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 16 13:23:29.740617 containerd[1559]: time="2025-12-16T13:23:29.740560459Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:29.763415 containerd[1559]: time="2025-12-16T13:23:29.762364971Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 13:23:29.763415 containerd[1559]: time="2025-12-16T13:23:29.763202136Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.696881964s" Dec 16 13:23:29.763415 containerd[1559]: time="2025-12-16T13:23:29.763282227Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 16 13:23:29.793978 containerd[1559]: time="2025-12-16T13:23:29.793903923Z" level=info msg="CreateContainer within sandbox 
\"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 13:23:29.806657 containerd[1559]: time="2025-12-16T13:23:29.806601995Z" level=info msg="Container 8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:29.811178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110851958.mount: Deactivated successfully. Dec 16 13:23:29.828038 containerd[1559]: time="2025-12-16T13:23:29.826888051Z" level=info msg="CreateContainer within sandbox \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba\"" Dec 16 13:23:29.830987 containerd[1559]: time="2025-12-16T13:23:29.829406015Z" level=info msg="StartContainer for \"8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba\"" Dec 16 13:23:29.831532 containerd[1559]: time="2025-12-16T13:23:29.831492152Z" level=info msg="connecting to shim 8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba" address="unix:///run/containerd/s/5cf469367f430f80758b613d24987e1a3dede9b1bb4bc51cf3463a383aa8f6a0" protocol=ttrpc version=3 Dec 16 13:23:29.918639 systemd[1]: Started cri-containerd-8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba.scope - libcontainer container 8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba. Dec 16 13:23:30.013757 systemd[1]: cri-containerd-8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba.scope: Deactivated successfully. 
Dec 16 13:23:30.018586 containerd[1559]: time="2025-12-16T13:23:30.018534388Z" level=info msg="received container exit event container_id:\"8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba\" id:\"8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba\" pid:3147 exited_at:{seconds:1765891410 nanos:17893378}" Dec 16 13:23:30.018707 containerd[1559]: time="2025-12-16T13:23:30.018681761Z" level=info msg="StartContainer for \"8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba\" returns successfully" Dec 16 13:23:30.031229 kubelet[2737]: I1216 13:23:30.031134 2737 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 13:23:30.072089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b6b18fad3091e30bee1bdb1638965cdffc7e3c15b20b8eea93f9ce6346aabba-rootfs.mount: Deactivated successfully. Dec 16 13:23:30.100130 kubelet[2737]: E1216 13:23:30.100068 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:30.100899 kubelet[2737]: E1216 13:23:30.100497 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:30.139674 systemd[1]: Created slice kubepods-burstable-podb774fb61_dea6_4e70_90e8_950a5055e70a.slice - libcontainer container kubepods-burstable-podb774fb61_dea6_4e70_90e8_950a5055e70a.slice. Dec 16 13:23:30.162465 systemd[1]: Created slice kubepods-burstable-pod9550dd91_bbf3_40ef_9d12_df0e158b04da.slice - libcontainer container kubepods-burstable-pod9550dd91_bbf3_40ef_9d12_df0e158b04da.slice. 
Dec 16 13:23:30.303223 kubelet[2737]: I1216 13:23:30.302613 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9550dd91-bbf3-40ef-9d12-df0e158b04da-config-volume\") pod \"coredns-668d6bf9bc-9w24l\" (UID: \"9550dd91-bbf3-40ef-9d12-df0e158b04da\") " pod="kube-system/coredns-668d6bf9bc-9w24l" Dec 16 13:23:30.303223 kubelet[2737]: I1216 13:23:30.302656 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkkg\" (UniqueName: \"kubernetes.io/projected/9550dd91-bbf3-40ef-9d12-df0e158b04da-kube-api-access-mmkkg\") pod \"coredns-668d6bf9bc-9w24l\" (UID: \"9550dd91-bbf3-40ef-9d12-df0e158b04da\") " pod="kube-system/coredns-668d6bf9bc-9w24l" Dec 16 13:23:30.303223 kubelet[2737]: I1216 13:23:30.302688 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b774fb61-dea6-4e70-90e8-950a5055e70a-config-volume\") pod \"coredns-668d6bf9bc-f676k\" (UID: \"b774fb61-dea6-4e70-90e8-950a5055e70a\") " pod="kube-system/coredns-668d6bf9bc-f676k" Dec 16 13:23:30.303223 kubelet[2737]: I1216 13:23:30.302705 2737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbcnh\" (UniqueName: \"kubernetes.io/projected/b774fb61-dea6-4e70-90e8-950a5055e70a-kube-api-access-mbcnh\") pod \"coredns-668d6bf9bc-f676k\" (UID: \"b774fb61-dea6-4e70-90e8-950a5055e70a\") " pod="kube-system/coredns-668d6bf9bc-f676k" Dec 16 13:23:30.448219 kubelet[2737]: E1216 13:23:30.448159 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:30.449147 containerd[1559]: time="2025-12-16T13:23:30.449103961Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-f676k,Uid:b774fb61-dea6-4e70-90e8-950a5055e70a,Namespace:kube-system,Attempt:0,}" Dec 16 13:23:30.469699 kubelet[2737]: E1216 13:23:30.468772 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:30.472674 containerd[1559]: time="2025-12-16T13:23:30.472373513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9w24l,Uid:9550dd91-bbf3-40ef-9d12-df0e158b04da,Namespace:kube-system,Attempt:0,}" Dec 16 13:23:30.522797 containerd[1559]: time="2025-12-16T13:23:30.522690871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9w24l,Uid:9550dd91-bbf3-40ef-9d12-df0e158b04da,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f90a1759a01ff2f77c3087746b78b4d483d9a918438589943cbeda9a6dfe382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:23:30.523193 kubelet[2737]: E1216 13:23:30.523136 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f90a1759a01ff2f77c3087746b78b4d483d9a918438589943cbeda9a6dfe382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:23:30.523290 kubelet[2737]: E1216 13:23:30.523236 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f90a1759a01ff2f77c3087746b78b4d483d9a918438589943cbeda9a6dfe382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9w24l" Dec 16 13:23:30.523290 
kubelet[2737]: E1216 13:23:30.523272 2737 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f90a1759a01ff2f77c3087746b78b4d483d9a918438589943cbeda9a6dfe382\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-9w24l" Dec 16 13:23:30.523462 kubelet[2737]: E1216 13:23:30.523376 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9w24l_kube-system(9550dd91-bbf3-40ef-9d12-df0e158b04da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9w24l_kube-system(9550dd91-bbf3-40ef-9d12-df0e158b04da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f90a1759a01ff2f77c3087746b78b4d483d9a918438589943cbeda9a6dfe382\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-9w24l" podUID="9550dd91-bbf3-40ef-9d12-df0e158b04da" Dec 16 13:23:30.527227 containerd[1559]: time="2025-12-16T13:23:30.527187846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f676k,Uid:b774fb61-dea6-4e70-90e8-950a5055e70a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c58dfa488c374ceaba444e1c2806869f410eaa32f5323ae3e7ee9e14f21e873\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:23:30.527441 kubelet[2737]: E1216 13:23:30.527381 2737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c58dfa488c374ceaba444e1c2806869f410eaa32f5323ae3e7ee9e14f21e873\": plugin type=\"flannel\" failed (add): 
loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 13:23:30.527503 kubelet[2737]: E1216 13:23:30.527458 2737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c58dfa488c374ceaba444e1c2806869f410eaa32f5323ae3e7ee9e14f21e873\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-f676k" Dec 16 13:23:30.527503 kubelet[2737]: E1216 13:23:30.527499 2737 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c58dfa488c374ceaba444e1c2806869f410eaa32f5323ae3e7ee9e14f21e873\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-f676k" Dec 16 13:23:30.527596 kubelet[2737]: E1216 13:23:30.527541 2737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-f676k_kube-system(b774fb61-dea6-4e70-90e8-950a5055e70a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-f676k_kube-system(b774fb61-dea6-4e70-90e8-950a5055e70a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c58dfa488c374ceaba444e1c2806869f410eaa32f5323ae3e7ee9e14f21e873\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-f676k" podUID="b774fb61-dea6-4e70-90e8-950a5055e70a" Dec 16 13:23:31.106344 kubelet[2737]: E1216 13:23:31.106304 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:31.115811 containerd[1559]: 
time="2025-12-16T13:23:31.115312901Z" level=info msg="CreateContainer within sandbox \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 16 13:23:31.133129 containerd[1559]: time="2025-12-16T13:23:31.132829821Z" level=info msg="Container 35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:31.141514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685399847.mount: Deactivated successfully. Dec 16 13:23:31.144518 containerd[1559]: time="2025-12-16T13:23:31.144460721Z" level=info msg="CreateContainer within sandbox \"1d336f4d9a838789d8e6fb584ecdb9d113b8e6b43b0b10f17bda4159b73486d7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b\"" Dec 16 13:23:31.145805 containerd[1559]: time="2025-12-16T13:23:31.145679841Z" level=info msg="StartContainer for \"35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b\"" Dec 16 13:23:31.146798 containerd[1559]: time="2025-12-16T13:23:31.146768527Z" level=info msg="connecting to shim 35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b" address="unix:///run/containerd/s/5cf469367f430f80758b613d24987e1a3dede9b1bb4bc51cf3463a383aa8f6a0" protocol=ttrpc version=3 Dec 16 13:23:31.291211 systemd[1]: Started cri-containerd-35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b.scope - libcontainer container 35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b. 
Dec 16 13:23:31.416669 containerd[1559]: time="2025-12-16T13:23:31.416478935Z" level=info msg="StartContainer for \"35d10eb0e24ae79ac879e62ac343b708ad4a19b097e12d3ecd4cd7001736292b\" returns successfully" Dec 16 13:23:32.109621 kubelet[2737]: E1216 13:23:32.109589 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:32.122650 kubelet[2737]: I1216 13:23:32.122573 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-xtr9r" podStartSLOduration=4.348306584 podStartE2EDuration="9.122516686s" podCreationTimestamp="2025-12-16 13:23:23 +0000 UTC" firstStartedPulling="2025-12-16 13:23:24.991350805 +0000 UTC m=+6.287035586" lastFinishedPulling="2025-12-16 13:23:29.765560887 +0000 UTC m=+11.061245688" observedRunningTime="2025-12-16 13:23:32.121826796 +0000 UTC m=+13.417511597" watchObservedRunningTime="2025-12-16 13:23:32.122516686 +0000 UTC m=+13.418201507" Dec 16 13:23:32.563487 systemd-networkd[1433]: flannel.1: Link UP Dec 16 13:23:32.563511 systemd-networkd[1433]: flannel.1: Gained carrier Dec 16 13:23:33.112450 kubelet[2737]: E1216 13:23:33.111974 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:34.028255 systemd-networkd[1433]: flannel.1: Gained IPv6LL Dec 16 13:23:41.970815 kubelet[2737]: E1216 13:23:41.970697 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:41.972910 containerd[1559]: time="2025-12-16T13:23:41.972830226Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-f676k,Uid:b774fb61-dea6-4e70-90e8-950a5055e70a,Namespace:kube-system,Attempt:0,}" Dec 16 13:23:42.007150 systemd-networkd[1433]: cni0: Link UP Dec 16 13:23:42.007159 systemd-networkd[1433]: cni0: Gained carrier Dec 16 13:23:42.012301 systemd-networkd[1433]: cni0: Lost carrier Dec 16 13:23:42.042078 systemd-networkd[1433]: vethab8f9f1c: Link UP Dec 16 13:23:42.048611 kernel: cni0: port 1(vethab8f9f1c) entered blocking state Dec 16 13:23:42.048801 kernel: cni0: port 1(vethab8f9f1c) entered disabled state Dec 16 13:23:42.052066 kernel: vethab8f9f1c: entered allmulticast mode Dec 16 13:23:42.052350 kernel: vethab8f9f1c: entered promiscuous mode Dec 16 13:23:42.067404 kernel: cni0: port 1(vethab8f9f1c) entered blocking state Dec 16 13:23:42.067481 kernel: cni0: port 1(vethab8f9f1c) entered forwarding state Dec 16 13:23:42.067452 systemd-networkd[1433]: vethab8f9f1c: Gained carrier Dec 16 13:23:42.068608 systemd-networkd[1433]: cni0: Gained carrier Dec 16 13:23:42.080855 containerd[1559]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 16 13:23:42.080855 containerd[1559]: delegateAdd: netconf sent to delegate plugin: Dec 16 13:23:42.131210 containerd[1559]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T13:23:42.130878830Z" level=info msg="connecting to shim 
2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748" address="unix:///run/containerd/s/5e2fdd1b26fc02920dd7b95f32488af575349663aa99f87af439a8e7eb45582a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:23:42.191236 systemd[1]: Started cri-containerd-2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748.scope - libcontainer container 2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748. Dec 16 13:23:42.281146 containerd[1559]: time="2025-12-16T13:23:42.280997629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f676k,Uid:b774fb61-dea6-4e70-90e8-950a5055e70a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748\"" Dec 16 13:23:42.283586 kubelet[2737]: E1216 13:23:42.282465 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:42.286191 containerd[1559]: time="2025-12-16T13:23:42.285902888Z" level=info msg="CreateContainer within sandbox \"2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:23:42.301558 containerd[1559]: time="2025-12-16T13:23:42.298520667Z" level=info msg="Container f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:42.309594 containerd[1559]: time="2025-12-16T13:23:42.309553134Z" level=info msg="CreateContainer within sandbox \"2f8c22af768c9318448e4d255b31e999081eb6cb3e9b82fd4b866668d0de9748\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38\"" Dec 16 13:23:42.313233 containerd[1559]: time="2025-12-16T13:23:42.312714558Z" level=info msg="StartContainer for \"f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38\"" Dec 16 
13:23:42.314625 containerd[1559]: time="2025-12-16T13:23:42.314326231Z" level=info msg="connecting to shim f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38" address="unix:///run/containerd/s/5e2fdd1b26fc02920dd7b95f32488af575349663aa99f87af439a8e7eb45582a" protocol=ttrpc version=3 Dec 16 13:23:42.334154 systemd[1]: Started cri-containerd-f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38.scope - libcontainer container f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38. Dec 16 13:23:42.414955 containerd[1559]: time="2025-12-16T13:23:42.414874880Z" level=info msg="StartContainer for \"f9e6cc59ceedc738c98f68d8c5441f9e709833401e05ab9330617db10232bf38\" returns successfully" Dec 16 13:23:42.971056 kubelet[2737]: E1216 13:23:42.970875 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:42.973879 containerd[1559]: time="2025-12-16T13:23:42.973147204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9w24l,Uid:9550dd91-bbf3-40ef-9d12-df0e158b04da,Namespace:kube-system,Attempt:0,}" Dec 16 13:23:42.985437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141258340.mount: Deactivated successfully. 
Dec 16 13:23:43.014324 systemd-networkd[1433]: vethc5794f6c: Link UP Dec 16 13:23:43.017393 kernel: cni0: port 2(vethc5794f6c) entered blocking state Dec 16 13:23:43.017478 kernel: cni0: port 2(vethc5794f6c) entered disabled state Dec 16 13:23:43.038054 kernel: vethc5794f6c: entered allmulticast mode Dec 16 13:23:43.038176 kernel: vethc5794f6c: entered promiscuous mode Dec 16 13:23:43.056793 kernel: cni0: port 2(vethc5794f6c) entered blocking state Dec 16 13:23:43.056886 kernel: cni0: port 2(vethc5794f6c) entered forwarding state Dec 16 13:23:43.057115 systemd-networkd[1433]: vethc5794f6c: Gained carrier Dec 16 13:23:43.060486 containerd[1559]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Dec 16 13:23:43.060486 containerd[1559]: delegateAdd: netconf sent to delegate plugin: Dec 16 13:23:43.108082 containerd[1559]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T13:23:43.107426759Z" level=info msg="connecting to shim 6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd" address="unix:///run/containerd/s/2f5cc9aa89e90cac3be8cd62b68f486bebd99d313924eabdad7757dbfee51a5b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 13:23:43.213123 kubelet[2737]: E1216 13:23:43.164915 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:43.213123 kubelet[2737]: I1216 13:23:43.205573 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f676k" podStartSLOduration=20.205498384 podStartE2EDuration="20.205498384s" podCreationTimestamp="2025-12-16 13:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:43.194934116 +0000 UTC m=+24.490618897" watchObservedRunningTime="2025-12-16 13:23:43.205498384 +0000 UTC m=+24.501183165" Dec 16 13:23:43.474189 systemd[1]: Started cri-containerd-6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd.scope - libcontainer container 6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd. Dec 16 13:23:43.588457 containerd[1559]: time="2025-12-16T13:23:43.588389825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9w24l,Uid:9550dd91-bbf3-40ef-9d12-df0e158b04da,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd\"" Dec 16 13:23:43.589906 kubelet[2737]: E1216 13:23:43.589584 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:43.593063 containerd[1559]: time="2025-12-16T13:23:43.593035279Z" level=info msg="CreateContainer within sandbox \"6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 13:23:43.611456 containerd[1559]: time="2025-12-16T13:23:43.609952974Z" level=info msg="Container d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f: CDI devices from CRI Config.CDIDevices: []" Dec 16 13:23:43.611123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114712003.mount: 
Deactivated successfully. Dec 16 13:23:43.617450 containerd[1559]: time="2025-12-16T13:23:43.617408479Z" level=info msg="CreateContainer within sandbox \"6f28465f9924e179b61ded86bd653da23926b5f7392dbd7b96ded1ad84186bcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f\"" Dec 16 13:23:43.618359 containerd[1559]: time="2025-12-16T13:23:43.618306285Z" level=info msg="StartContainer for \"d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f\"" Dec 16 13:23:43.619910 containerd[1559]: time="2025-12-16T13:23:43.619812466Z" level=info msg="connecting to shim d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f" address="unix:///run/containerd/s/2f5cc9aa89e90cac3be8cd62b68f486bebd99d313924eabdad7757dbfee51a5b" protocol=ttrpc version=3 Dec 16 13:23:43.629517 systemd-networkd[1433]: cni0: Gained IPv6LL Dec 16 13:23:43.646296 systemd[1]: Started cri-containerd-d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f.scope - libcontainer container d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f. 
Dec 16 13:23:43.759289 systemd-networkd[1433]: vethab8f9f1c: Gained IPv6LL Dec 16 13:23:44.031003 containerd[1559]: time="2025-12-16T13:23:44.030816543Z" level=info msg="StartContainer for \"d1ad72cf4c0da1fe37813ff077a14909d8cd0e85f10ab92c2745da9e75b2446f\" returns successfully" Dec 16 13:23:44.166077 kubelet[2737]: E1216 13:23:44.165864 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:44.196899 kubelet[2737]: I1216 13:23:44.196828 2737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9w24l" podStartSLOduration=21.196790618 podStartE2EDuration="21.196790618s" podCreationTimestamp="2025-12-16 13:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 13:23:44.180990398 +0000 UTC m=+25.476675179" watchObservedRunningTime="2025-12-16 13:23:44.196790618 +0000 UTC m=+25.492475399" Dec 16 13:23:45.100409 systemd-networkd[1433]: vethc5794f6c: Gained IPv6LL Dec 16 13:23:45.168976 kubelet[2737]: E1216 13:23:45.168918 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:46.170418 kubelet[2737]: E1216 13:23:46.170381 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:53.165264 kubelet[2737]: E1216 13:23:53.163885 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:23:53.190884 kubelet[2737]: E1216 13:23:53.190796 2737 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:26.971122 kubelet[2737]: E1216 13:24:26.970828 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:37.972993 kubelet[2737]: E1216 13:24:37.972574 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:37.972993 kubelet[2737]: E1216 13:24:37.972890 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:47.970705 kubelet[2737]: E1216 13:24:47.970638 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:48.971481 kubelet[2737]: E1216 13:24:48.970852 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:24:55.970572 kubelet[2737]: E1216 13:24:55.970528 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:25:02.751037 systemd[1]: Started sshd@7-172.236.100.113:22-139.178.89.65:45070.service - OpenSSH per-connection server daemon (139.178.89.65:45070). 
Dec 16 13:25:03.123792 sshd[3952]: Accepted publickey for core from 139.178.89.65 port 45070 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:03.125366 sshd-session[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:03.137890 systemd-logind[1540]: New session 8 of user core. Dec 16 13:25:03.146295 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 13:25:03.640867 sshd[3961]: Connection closed by 139.178.89.65 port 45070 Dec 16 13:25:03.641926 sshd-session[3952]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:03.646959 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Dec 16 13:25:03.647699 systemd[1]: sshd@7-172.236.100.113:22-139.178.89.65:45070.service: Deactivated successfully. Dec 16 13:25:03.651033 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 13:25:03.658732 systemd-logind[1540]: Removed session 8. Dec 16 13:25:08.710718 systemd[1]: Started sshd@8-172.236.100.113:22-139.178.89.65:45084.service - OpenSSH per-connection server daemon (139.178.89.65:45084). Dec 16 13:25:08.973801 kubelet[2737]: E1216 13:25:08.973004 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Dec 16 13:25:09.085970 sshd[4010]: Accepted publickey for core from 139.178.89.65 port 45084 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:09.087962 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:09.095547 systemd-logind[1540]: New session 9 of user core. Dec 16 13:25:09.101170 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 16 13:25:09.451948 sshd[4013]: Connection closed by 139.178.89.65 port 45084 Dec 16 13:25:09.453219 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:09.459282 systemd[1]: sshd@8-172.236.100.113:22-139.178.89.65:45084.service: Deactivated successfully. Dec 16 13:25:09.461696 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 13:25:09.463765 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Dec 16 13:25:09.465319 systemd-logind[1540]: Removed session 9. Dec 16 13:25:14.520868 systemd[1]: Started sshd@9-172.236.100.113:22-139.178.89.65:60974.service - OpenSSH per-connection server daemon (139.178.89.65:60974). Dec 16 13:25:14.893331 sshd[4047]: Accepted publickey for core from 139.178.89.65 port 60974 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:14.895418 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:14.902075 systemd-logind[1540]: New session 10 of user core. Dec 16 13:25:14.908185 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 13:25:15.268939 sshd[4050]: Connection closed by 139.178.89.65 port 60974 Dec 16 13:25:15.270367 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:15.277253 systemd[1]: sshd@9-172.236.100.113:22-139.178.89.65:60974.service: Deactivated successfully. Dec 16 13:25:15.279868 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 13:25:15.281695 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Dec 16 13:25:15.283264 systemd-logind[1540]: Removed session 10. Dec 16 13:25:15.335773 systemd[1]: Started sshd@10-172.236.100.113:22-139.178.89.65:60976.service - OpenSSH per-connection server daemon (139.178.89.65:60976). 
Dec 16 13:25:15.687321 sshd[4063]: Accepted publickey for core from 139.178.89.65 port 60976 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:15.689343 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:15.695649 systemd-logind[1540]: New session 11 of user core. Dec 16 13:25:15.703220 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 13:25:16.132632 sshd[4066]: Connection closed by 139.178.89.65 port 60976 Dec 16 13:25:16.134456 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:16.141006 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. Dec 16 13:25:16.142058 systemd[1]: sshd@10-172.236.100.113:22-139.178.89.65:60976.service: Deactivated successfully. Dec 16 13:25:16.144506 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 13:25:16.146187 systemd-logind[1540]: Removed session 11. Dec 16 13:25:16.193339 systemd[1]: Started sshd@11-172.236.100.113:22-139.178.89.65:60984.service - OpenSSH per-connection server daemon (139.178.89.65:60984). Dec 16 13:25:16.540926 sshd[4076]: Accepted publickey for core from 139.178.89.65 port 60984 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:16.541704 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:16.549005 systemd-logind[1540]: New session 12 of user core. Dec 16 13:25:16.558178 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 13:25:16.877302 sshd[4079]: Connection closed by 139.178.89.65 port 60984 Dec 16 13:25:16.879298 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:16.885119 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Dec 16 13:25:16.886436 systemd[1]: sshd@11-172.236.100.113:22-139.178.89.65:60984.service: Deactivated successfully. 
Dec 16 13:25:16.890187 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 13:25:16.892785 systemd-logind[1540]: Removed session 12. Dec 16 13:25:21.946465 systemd[1]: Started sshd@12-172.236.100.113:22-139.178.89.65:44134.service - OpenSSH per-connection server daemon (139.178.89.65:44134). Dec 16 13:25:22.318342 sshd[4114]: Accepted publickey for core from 139.178.89.65 port 44134 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:22.320503 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:22.326410 systemd-logind[1540]: New session 13 of user core. Dec 16 13:25:22.334272 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 13:25:22.681526 sshd[4117]: Connection closed by 139.178.89.65 port 44134 Dec 16 13:25:22.683276 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:22.690910 systemd[1]: sshd@12-172.236.100.113:22-139.178.89.65:44134.service: Deactivated successfully. Dec 16 13:25:22.695469 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 13:25:22.698964 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Dec 16 13:25:22.700872 systemd-logind[1540]: Removed session 13. Dec 16 13:25:22.750765 systemd[1]: Started sshd@13-172.236.100.113:22-139.178.89.65:44148.service - OpenSSH per-connection server daemon (139.178.89.65:44148). Dec 16 13:25:23.116954 sshd[4129]: Accepted publickey for core from 139.178.89.65 port 44148 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:23.118638 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:23.126219 systemd-logind[1540]: New session 14 of user core. Dec 16 13:25:23.134271 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 13:25:23.512846 sshd[4138]: Connection closed by 139.178.89.65 port 44148 Dec 16 13:25:23.513751 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:23.520347 systemd[1]: sshd@13-172.236.100.113:22-139.178.89.65:44148.service: Deactivated successfully. Dec 16 13:25:23.524057 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 13:25:23.527650 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Dec 16 13:25:23.529411 systemd-logind[1540]: Removed session 14. Dec 16 13:25:23.589051 systemd[1]: Started sshd@14-172.236.100.113:22-139.178.89.65:44154.service - OpenSSH per-connection server daemon (139.178.89.65:44154). Dec 16 13:25:23.946902 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 44154 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:23.948696 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:23.955779 systemd-logind[1540]: New session 15 of user core. Dec 16 13:25:23.960176 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 13:25:24.871153 sshd[4166]: Connection closed by 139.178.89.65 port 44154 Dec 16 13:25:24.872763 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:24.877907 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Dec 16 13:25:24.879110 systemd[1]: sshd@14-172.236.100.113:22-139.178.89.65:44154.service: Deactivated successfully. Dec 16 13:25:24.883184 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 13:25:24.886804 systemd-logind[1540]: Removed session 15. Dec 16 13:25:24.931482 systemd[1]: Started sshd@15-172.236.100.113:22-139.178.89.65:44164.service - OpenSSH per-connection server daemon (139.178.89.65:44164). 
Dec 16 13:25:25.280519 sshd[4185]: Accepted publickey for core from 139.178.89.65 port 44164 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:25.282468 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:25.290059 systemd-logind[1540]: New session 16 of user core. Dec 16 13:25:25.298246 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 13:25:25.728802 sshd[4188]: Connection closed by 139.178.89.65 port 44164 Dec 16 13:25:25.729852 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:25.736939 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Dec 16 13:25:25.737254 systemd[1]: sshd@15-172.236.100.113:22-139.178.89.65:44164.service: Deactivated successfully. Dec 16 13:25:25.740165 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 13:25:25.742922 systemd-logind[1540]: Removed session 16. Dec 16 13:25:25.798409 systemd[1]: Started sshd@16-172.236.100.113:22-139.178.89.65:44176.service - OpenSSH per-connection server daemon (139.178.89.65:44176). Dec 16 13:25:26.149212 sshd[4198]: Accepted publickey for core from 139.178.89.65 port 44176 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I Dec 16 13:25:26.151425 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 13:25:26.157961 systemd-logind[1540]: New session 17 of user core. Dec 16 13:25:26.162265 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 13:25:26.475555 sshd[4201]: Connection closed by 139.178.89.65 port 44176 Dec 16 13:25:26.477439 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Dec 16 13:25:26.485922 systemd[1]: sshd@16-172.236.100.113:22-139.178.89.65:44176.service: Deactivated successfully. Dec 16 13:25:26.490513 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 16 13:25:26.491840 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit.
Dec 16 13:25:26.494317 systemd-logind[1540]: Removed session 17.
Dec 16 13:25:29.971541 kubelet[2737]: E1216 13:25:29.971383 2737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Dec 16 13:25:31.545668 systemd[1]: Started sshd@17-172.236.100.113:22-139.178.89.65:50142.service - OpenSSH per-connection server daemon (139.178.89.65:50142).
Dec 16 13:25:31.905289 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 50142 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:25:31.906992 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:25:31.914876 systemd-logind[1540]: New session 18 of user core.
Dec 16 13:25:31.917180 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 13:25:32.260533 sshd[4239]: Connection closed by 139.178.89.65 port 50142
Dec 16 13:25:32.261450 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Dec 16 13:25:32.268191 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit.
Dec 16 13:25:32.269346 systemd[1]: sshd@17-172.236.100.113:22-139.178.89.65:50142.service: Deactivated successfully.
Dec 16 13:25:32.272499 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 13:25:32.274733 systemd-logind[1540]: Removed session 18.
Dec 16 13:25:37.326366 systemd[1]: Started sshd@18-172.236.100.113:22-139.178.89.65:50148.service - OpenSSH per-connection server daemon (139.178.89.65:50148).
Dec 16 13:25:37.694190 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 50148 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:25:37.696191 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:25:37.702145 systemd-logind[1540]: New session 19 of user core.
Dec 16 13:25:37.713204 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 13:25:38.038991 sshd[4275]: Connection closed by 139.178.89.65 port 50148
Dec 16 13:25:38.041258 sshd-session[4272]: pam_unix(sshd:session): session closed for user core
Dec 16 13:25:38.049542 systemd[1]: sshd@18-172.236.100.113:22-139.178.89.65:50148.service: Deactivated successfully.
Dec 16 13:25:38.054399 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 13:25:38.056261 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit.
Dec 16 13:25:38.057971 systemd-logind[1540]: Removed session 19.
Dec 16 13:25:43.126328 systemd[1]: Started sshd@19-172.236.100.113:22-139.178.89.65:46848.service - OpenSSH per-connection server daemon (139.178.89.65:46848).
Dec 16 13:25:43.517872 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 46848 ssh2: RSA SHA256:LWMLg6AOKB1Iv8aiZY9bxiLvWkX87UnOkwjdStm107I
Dec 16 13:25:43.520003 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 13:25:43.536510 systemd-logind[1540]: New session 20 of user core.
Dec 16 13:25:43.541479 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 13:25:43.858352 sshd[4332]: Connection closed by 139.178.89.65 port 46848
Dec 16 13:25:43.859408 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
Dec 16 13:25:43.867555 systemd[1]: sshd@19-172.236.100.113:22-139.178.89.65:46848.service: Deactivated successfully.
Dec 16 13:25:43.871565 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 13:25:43.873470 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit.
Dec 16 13:25:43.875855 systemd-logind[1540]: Removed session 20.