Dec 12 18:45:16.900813 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:45:16.900836 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:45:16.900845 kernel: BIOS-provided physical RAM map:
Dec 12 18:45:16.900852 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Dec 12 18:45:16.900858 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Dec 12 18:45:16.900864 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 12 18:45:16.900873 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Dec 12 18:45:16.900880 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Dec 12 18:45:16.900886 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 12 18:45:16.900892 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 12 18:45:16.900898 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:45:16.900904 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 12 18:45:16.900911 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Dec 12 18:45:16.900917 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:45:16.900927 kernel: NX (Execute Disable) protection: active
Dec 12 18:45:16.900933 kernel: APIC: Static calls initialized
Dec 12 18:45:16.900940 kernel: SMBIOS 2.8 present.
Dec 12 18:45:16.900947 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Dec 12 18:45:16.900954 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:45:16.900960 kernel: Hypervisor detected: KVM
Dec 12 18:45:16.900969 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:45:16.900976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:45:16.900982 kernel: kvm-clock: using sched offset of 7173431680 cycles
Dec 12 18:45:16.900989 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:45:16.900996 kernel: tsc: Detected 2000.000 MHz processor
Dec 12 18:45:16.901003 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:45:16.901011 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:45:16.901017 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Dec 12 18:45:16.901024 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 12 18:45:16.901032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:45:16.901041 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Dec 12 18:45:16.901047 kernel: Using GB pages for direct mapping
Dec 12 18:45:16.901054 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:45:16.901061 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Dec 12 18:45:16.901068 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901075 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901082 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901088 kernel: ACPI: FACS 0x000000007FFE0000 000040
Dec 12 18:45:16.901095 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901149 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901162 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901169 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:45:16.901176 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Dec 12 18:45:16.901184 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Dec 12 18:45:16.901193 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Dec 12 18:45:16.901200 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Dec 12 18:45:16.901207 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Dec 12 18:45:16.901215 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Dec 12 18:45:16.901222 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Dec 12 18:45:16.901229 kernel: No NUMA configuration found
Dec 12 18:45:16.901236 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Dec 12 18:45:16.901243 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Dec 12 18:45:16.901251 kernel: Zone ranges:
Dec 12 18:45:16.901260 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:45:16.901267 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 12 18:45:16.901274 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:45:16.901281 kernel: Device empty
Dec 12 18:45:16.901288 kernel: Movable zone start for each node
Dec 12 18:45:16.901295 kernel: Early memory node ranges
Dec 12 18:45:16.901302 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 12 18:45:16.901309 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Dec 12 18:45:16.901316 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Dec 12 18:45:16.901323 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Dec 12 18:45:16.901333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:45:16.901340 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 12 18:45:16.901347 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Dec 12 18:45:16.901354 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:45:16.901361 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:45:16.901368 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:45:16.901375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:45:16.901382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:45:16.901389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:45:16.901399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:45:16.901406 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:45:16.901413 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:45:16.901420 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:45:16.901427 kernel: TSC deadline timer available
Dec 12 18:45:16.901434 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:45:16.901441 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:45:16.901448 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:45:16.901455 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:45:16.901464 kernel: CPU topo: Num. cores per package: 2
Dec 12 18:45:16.901472 kernel: CPU topo: Num. threads per package: 2
Dec 12 18:45:16.901479 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 12 18:45:16.901486 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:45:16.901493 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:45:16.901500 kernel: kvm-guest: setup PV sched yield
Dec 12 18:45:16.901507 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 12 18:45:16.901514 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:45:16.901521 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:45:16.901531 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 12 18:45:16.901538 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 12 18:45:16.901545 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 12 18:45:16.901552 kernel: pcpu-alloc: [0] 0 1
Dec 12 18:45:16.901559 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:45:16.901566 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:45:16.901574 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:45:16.901582 kernel: random: crng init done
Dec 12 18:45:16.901591 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:45:16.901598 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:45:16.901606 kernel: Fallback order for Node 0: 0
Dec 12 18:45:16.901613 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Dec 12 18:45:16.901620 kernel: Policy zone: Normal
Dec 12 18:45:16.901627 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:45:16.901634 kernel: software IO TLB: area num 2.
Dec 12 18:45:16.901641 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 18:45:16.901648 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:45:16.901658 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:45:16.901665 kernel: Dynamic Preempt: voluntary
Dec 12 18:45:16.901672 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:45:16.901683 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:45:16.901690 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 18:45:16.901698 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:45:16.901705 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:45:16.901712 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:45:16.901719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:45:16.901729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 18:45:16.901736 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:45:16.901750 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:45:16.901760 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 18:45:16.901768 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 12 18:45:16.901775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:45:16.901783 kernel: Console: colour VGA+ 80x25
Dec 12 18:45:16.901790 kernel: printk: legacy console [tty0] enabled
Dec 12 18:45:16.901797 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:45:16.901805 kernel: ACPI: Core revision 20240827
Dec 12 18:45:16.901814 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:45:16.901821 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:45:16.901828 kernel: x2apic enabled
Dec 12 18:45:16.901835 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:45:16.901843 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:45:16.901850 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:45:16.901857 kernel: kvm-guest: setup PV IPIs
Dec 12 18:45:16.901866 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:45:16.901873 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:45:16.901881 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Dec 12 18:45:16.901888 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:45:16.901895 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:45:16.901902 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:45:16.901909 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:45:16.901916 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:45:16.901924 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:45:16.901933 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 12 18:45:16.901940 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:45:16.901947 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:45:16.901955 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:45:16.901962 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:45:16.901970 kernel: active return thunk: srso_alias_return_thunk
Dec 12 18:45:16.901977 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:45:16.901984 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Dec 12 18:45:16.901993 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 12 18:45:16.902000 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:45:16.902007 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:45:16.902015 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:45:16.902022 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 12 18:45:16.902029 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:45:16.902036 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Dec 12 18:45:16.902044 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Dec 12 18:45:16.902051 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:45:16.902060 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:45:16.902067 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:45:16.902074 kernel: landlock: Up and running.
Dec 12 18:45:16.902081 kernel: SELinux: Initializing.
Dec 12 18:45:16.902089 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:45:16.902096 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:45:16.902157 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Dec 12 18:45:16.906120 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:45:16.906132 kernel: ... version:                0
Dec 12 18:45:16.906145 kernel: ... bit width:              48
Dec 12 18:45:16.906153 kernel: ... generic registers:      6
Dec 12 18:45:16.906160 kernel: ... value mask:             0000ffffffffffff
Dec 12 18:45:16.906168 kernel: ... max period:             00007fffffffffff
Dec 12 18:45:16.906175 kernel: ... fixed-purpose events:   0
Dec 12 18:45:16.906183 kernel: ... event mask:             000000000000003f
Dec 12 18:45:16.906190 kernel: signal: max sigframe size: 3376
Dec 12 18:45:16.906197 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:45:16.906205 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:45:16.906215 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:45:16.906223 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:45:16.906230 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:45:16.906237 kernel: .... node #0, CPUs: #1
Dec 12 18:45:16.906245 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 18:45:16.906252 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Dec 12 18:45:16.906260 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 235488K reserved, 0K cma-reserved)
Dec 12 18:45:16.906297 kernel: devtmpfs: initialized
Dec 12 18:45:16.906305 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:45:16.906315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:45:16.906322 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 18:45:16.906330 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:45:16.906337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:45:16.906344 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:45:16.906352 kernel: audit: type=2000 audit(1765565113.533:1): state=initialized audit_enabled=0 res=1
Dec 12 18:45:16.906359 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:45:16.906366 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:45:16.906374 kernel: cpuidle: using governor menu
Dec 12 18:45:16.906396 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:45:16.906403 kernel: dca service started, version 1.12.1
Dec 12 18:45:16.906411 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Dec 12 18:45:16.906418 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 12 18:45:16.906425 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:45:16.906433 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:45:16.906440 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:45:16.906447 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:45:16.906455 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:45:16.906464 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:45:16.906472 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:45:16.906479 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:45:16.906486 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:45:16.906493 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:45:16.906501 kernel: ACPI: Interpreter enabled
Dec 12 18:45:16.906508 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:45:16.906515 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:45:16.906523 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:45:16.906532 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:45:16.906539 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:45:16.906547 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:45:16.906732 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:45:16.906862 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:45:16.906983 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:45:16.906994 kernel: PCI host bridge to bus 0000:00
Dec 12 18:45:16.907372 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:45:16.907497 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:45:16.907610 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:45:16.907721 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 12 18:45:16.907831 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 12 18:45:16.907972 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Dec 12 18:45:16.908086 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:45:16.910270 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:45:16.910417 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:45:16.910542 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Dec 12 18:45:16.910703 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Dec 12 18:45:16.910830 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Dec 12 18:45:16.910950 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:45:16.911085 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:45:16.916574 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Dec 12 18:45:16.916706 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Dec 12 18:45:16.916829 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 12 18:45:16.916961 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:45:16.917082 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Dec 12 18:45:16.917461 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Dec 12 18:45:16.917604 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 12 18:45:16.917724 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Dec 12 18:45:16.917853 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:45:16.917996 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:45:16.919170 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:45:16.919310 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Dec 12 18:45:16.919433 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Dec 12 18:45:16.919581 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:45:16.919703 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Dec 12 18:45:16.919713 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:45:16.919721 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:45:16.919729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:45:16.919736 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:45:16.919743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:45:16.919754 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:45:16.919762 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:45:16.919769 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:45:16.919776 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:45:16.919784 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:45:16.919791 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:45:16.919799 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:45:16.919806 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:45:16.919813 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:45:16.919823 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:45:16.919830 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:45:16.919838 kernel: iommu: Default domain type: Translated
Dec 12 18:45:16.919845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:45:16.919852 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:45:16.919860 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:45:16.919867 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Dec 12 18:45:16.919874 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Dec 12 18:45:16.919995 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:45:16.920147 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:45:16.920273 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:45:16.920283 kernel: vgaarb: loaded
Dec 12 18:45:16.920291 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:45:16.920298 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:45:16.920306 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:45:16.920313 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:45:16.920341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:45:16.920369 kernel: pnp: PnP ACPI init
Dec 12 18:45:16.920802 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 12 18:45:16.920840 kernel: pnp: PnP ACPI: found 5 devices
Dec 12 18:45:16.920868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:45:16.920897 kernel: NET: Registered PF_INET protocol family
Dec 12 18:45:16.920921 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:45:16.920949 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:45:16.920976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:45:16.921000 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:45:16.921011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:45:16.921018 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:45:16.921026 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:45:16.921033 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:45:16.921041 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:45:16.921048 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:45:16.922286 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:45:16.922409 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:45:16.922521 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:45:16.922639 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 12 18:45:16.922749 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 12 18:45:16.922859 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Dec 12 18:45:16.922869 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:45:16.922877 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 12 18:45:16.922885 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Dec 12 18:45:16.922893 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Dec 12 18:45:16.922900 kernel: Initialise system trusted keyrings
Dec 12 18:45:16.922911 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:45:16.922919 kernel: Key type asymmetric registered
Dec 12 18:45:16.922926 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:45:16.922933 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:45:16.922941 kernel: io scheduler mq-deadline registered
Dec 12 18:45:16.922948 kernel: io scheduler kyber registered
Dec 12 18:45:16.922956 kernel: io scheduler bfq registered
Dec 12 18:45:16.922963 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:45:16.922971 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:45:16.922981 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:45:16.922989 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:45:16.922996 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:45:16.923004 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:45:16.923011 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:45:16.923018 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:45:16.923026 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:45:16.927383 kernel: rtc_cmos 00:03: RTC can wake from S4
Dec 12 18:45:16.927511 kernel: rtc_cmos 00:03: registered as rtc0
Dec 12 18:45:16.927626 kernel: rtc_cmos 00:03: setting system clock to 2025-12-12T18:45:16 UTC (1765565116)
Dec 12 18:45:16.927740 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 12 18:45:16.927751 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:45:16.927759 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:45:16.927767 kernel: Segment Routing with IPv6
Dec 12 18:45:16.927774 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:45:16.927782 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:45:16.927790 kernel: Key type dns_resolver registered
Dec 12 18:45:16.927801 kernel: IPI shorthand broadcast: enabled
Dec 12 18:45:16.927809 kernel: sched_clock: Marking stable (2816003820, 347495460)->(3248859700, -85360420)
Dec 12 18:45:16.927817 kernel: registered taskstats version 1
Dec 12 18:45:16.927825 kernel: Loading compiled-in X.509 certificates
Dec 12 18:45:16.927833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:45:16.927840 kernel: Demotion targets for Node 0: null
Dec 12 18:45:16.927848 kernel: Key type .fscrypt registered
Dec 12 18:45:16.927856 kernel: Key type fscrypt-provisioning registered
Dec 12 18:45:16.927864 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:45:16.927874 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:45:16.927882 kernel: ima: No architecture policies found
Dec 12 18:45:16.928004 kernel: clk: Disabling unused clocks
Dec 12 18:45:16.928033 kernel: Warning: unable to open an initial console.
Dec 12 18:45:16.928044 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:45:16.928053 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:45:16.928062 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:45:16.928070 kernel: Run /init as init process
Dec 12 18:45:16.928079 kernel: with arguments:
Dec 12 18:45:16.928099 kernel: /init
Dec 12 18:45:16.928128 kernel: with environment:
Dec 12 18:45:16.928155 kernel: HOME=/
Dec 12 18:45:16.928167 kernel: TERM=linux
Dec 12 18:45:16.928179 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:45:16.928194 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:45:16.928204 systemd[1]: Detected virtualization kvm.
Dec 12 18:45:16.928216 systemd[1]: Detected architecture x86-64.
Dec 12 18:45:16.928224 systemd[1]: Running in initrd.
Dec 12 18:45:16.928233 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:45:16.928243 systemd[1]: Hostname set to .
Dec 12 18:45:16.928252 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:45:16.928261 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:45:16.928271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:16.928280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:16.928294 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:45:16.928303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:45:16.928312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:45:16.928322 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:45:16.928333 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:45:16.928342 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:45:16.928352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:16.928364 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:16.928373 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:45:16.928382 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:45:16.928391 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:45:16.928400 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:45:16.928412 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:45:16.928422 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 18:45:16.928431 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 18:45:16.928440 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 18:45:16.928452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 18:45:16.928462 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 18:45:16.928473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 18:45:16.928483 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:45:16.928492 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 18:45:16.928504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 18:45:16.928514 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 18:45:16.928523 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 18:45:16.928537 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 18:45:16.928553 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 18:45:16.928568 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 18:45:16.928580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:45:16.928645 systemd-journald[187]: Collecting audit messages is disabled. Dec 12 18:45:16.928674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 18:45:16.928688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 18:45:16.928697 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 12 18:45:16.928707 kernel: Bridge firewalling registered Dec 12 18:45:16.928718 systemd-journald[187]: Journal started Dec 12 18:45:16.928740 systemd-journald[187]: Runtime Journal (/run/log/journal/13767eaf233041a9a7b50d2e96ca4f27) is 8M, max 78.2M, 70.2M free. Dec 12 18:45:16.898277 systemd-modules-load[188]: Inserted module 'overlay' Dec 12 18:45:16.928871 systemd-modules-load[188]: Inserted module 'br_netfilter' Dec 12 18:45:16.935215 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 18:45:16.935818 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 18:45:16.944259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 18:45:17.032201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:45:17.035995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 18:45:17.039239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 18:45:17.042150 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 18:45:17.048544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 18:45:17.064295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 18:45:17.066669 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 18:45:17.068938 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 18:45:17.071751 systemd-tmpfiles[204]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 18:45:17.072024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 18:45:17.076267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 12 18:45:17.088293 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 18:45:17.092354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 18:45:17.105365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 18:45:17.112419 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 12 18:45:17.143623 systemd-resolved[224]: Positive Trust Anchors: Dec 12 18:45:17.143635 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 18:45:17.143660 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 18:45:17.147912 systemd-resolved[224]: Defaulting to hostname 'linux'. Dec 12 18:45:17.151679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 18:45:17.152941 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 12 18:45:17.212146 kernel: SCSI subsystem initialized Dec 12 18:45:17.221174 kernel: Loading iSCSI transport class v2.0-870. Dec 12 18:45:17.232148 kernel: iscsi: registered transport (tcp) Dec 12 18:45:17.252447 kernel: iscsi: registered transport (qla4xxx) Dec 12 18:45:17.252476 kernel: QLogic iSCSI HBA Driver Dec 12 18:45:17.275825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 18:45:17.295469 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 18:45:17.298011 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 18:45:17.354951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 18:45:17.357063 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 18:45:17.408136 kernel: raid6: avx2x4 gen() 32765 MB/s Dec 12 18:45:17.426137 kernel: raid6: avx2x2 gen() 30928 MB/s Dec 12 18:45:17.444449 kernel: raid6: avx2x1 gen() 22003 MB/s Dec 12 18:45:17.444473 kernel: raid6: using algorithm avx2x4 gen() 32765 MB/s Dec 12 18:45:17.465546 kernel: raid6: .... xor() 4498 MB/s, rmw enabled Dec 12 18:45:17.465566 kernel: raid6: using avx2x2 recovery algorithm Dec 12 18:45:17.488137 kernel: xor: automatically using best checksumming function avx Dec 12 18:45:17.624155 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 18:45:17.632504 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 18:45:17.635330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 18:45:17.665329 systemd-udevd[434]: Using default interface naming scheme 'v255'. Dec 12 18:45:17.671194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 18:45:17.675288 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 12 18:45:17.706190 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation Dec 12 18:45:17.737568 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 18:45:17.740291 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 18:45:17.814600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 18:45:17.818228 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 18:45:17.893146 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Dec 12 18:45:17.897142 kernel: cryptd: max_cpu_qlen set to 1000 Dec 12 18:45:17.912539 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 12 18:45:17.923125 kernel: scsi host0: Virtio SCSI HBA Dec 12 18:45:17.926266 kernel: AES CTR mode by8 optimization enabled Dec 12 18:45:17.938140 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 12 18:45:17.949485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 18:45:17.950825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:45:17.954322 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:45:18.056740 kernel: libata version 3.00 loaded. Dec 12 18:45:17.968223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 18:45:18.065408 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 12 18:45:18.076221 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 12 18:45:18.082166 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Dec 12 18:45:18.082354 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 12 18:45:18.082510 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 12 18:45:18.087758 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 12 18:45:18.094161 kernel: ahci 0000:00:1f.2: version 3.0 Dec 12 18:45:18.108604 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 12 18:45:18.115926 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 12 18:45:18.116135 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 12 18:45:18.116288 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 12 18:45:18.120262 kernel: scsi host1: ahci Dec 12 18:45:18.120442 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 18:45:18.123436 kernel: scsi host2: ahci Dec 12 18:45:18.123480 kernel: GPT:9289727 != 167739391 Dec 12 18:45:18.125047 kernel: scsi host3: ahci Dec 12 18:45:18.125083 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 18:45:18.125095 kernel: GPT:9289727 != 167739391 Dec 12 18:45:18.125123 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 12 18:45:18.125134 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:45:18.128307 kernel: scsi host4: ahci Dec 12 18:45:18.128346 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 12 18:45:18.139234 kernel: scsi host5: ahci Dec 12 18:45:18.149152 kernel: scsi host6: ahci Dec 12 18:45:18.149363 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1 Dec 12 18:45:18.149376 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1 Dec 12 18:45:18.149386 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1 Dec 12 18:45:18.149401 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1 Dec 12 18:45:18.149411 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1 Dec 12 18:45:18.149421 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1 Dec 12 18:45:18.193757 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 12 18:45:18.277817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 18:45:18.292687 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 12 18:45:18.307089 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 12 18:45:18.314288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 12 18:45:18.315067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 12 18:45:18.318528 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 18:45:18.340755 disk-uuid[606]: Primary Header is updated. Dec 12 18:45:18.340755 disk-uuid[606]: Secondary Entries is updated. Dec 12 18:45:18.340755 disk-uuid[606]: Secondary Header is updated. 
Dec 12 18:45:18.353284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:45:18.366158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:45:18.466831 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.467411 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.467424 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.467434 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.471142 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.471171 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 12 18:45:18.529617 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 18:45:18.549253 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 18:45:18.551055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 18:45:18.551897 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 18:45:18.555270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 18:45:18.577431 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 18:45:19.374221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 12 18:45:19.376433 disk-uuid[607]: The operation has completed successfully. Dec 12 18:45:19.436886 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 18:45:19.437032 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 18:45:19.462145 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 18:45:19.477370 sh[635]: Success Dec 12 18:45:19.500497 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 18:45:19.500575 kernel: device-mapper: uevent: version 1.0.3 Dec 12 18:45:19.503960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 18:45:19.515145 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 12 18:45:19.558572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 18:45:19.563209 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 18:45:19.574131 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 18:45:19.586146 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (647) Dec 12 18:45:19.586224 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 12 18:45:19.589728 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:45:19.600536 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 12 18:45:19.600570 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 18:45:19.604725 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 18:45:19.606822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 18:45:19.609219 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 18:45:19.610925 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 18:45:19.612256 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 18:45:19.616608 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 12 18:45:19.648161 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (681) Dec 12 18:45:19.655315 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:45:19.655341 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:45:19.663299 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:45:19.663323 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:45:19.663336 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:45:19.674496 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:45:19.676591 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 18:45:19.678445 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 18:45:19.796554 ignition[745]: Ignition 2.22.0 Dec 12 18:45:19.797688 ignition[745]: Stage: fetch-offline Dec 12 18:45:19.797741 ignition[745]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:19.797753 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:19.797867 ignition[745]: parsed url from cmdline: "" Dec 12 18:45:19.803355 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 18:45:19.797871 ignition[745]: no config URL provided Dec 12 18:45:19.797878 ignition[745]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:45:19.797892 ignition[745]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:45:19.806511 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 18:45:19.797901 ignition[745]: failed to fetch config: resource requires networking Dec 12 18:45:19.798083 ignition[745]: Ignition finished successfully Dec 12 18:45:19.810426 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 12 18:45:19.852862 systemd-networkd[821]: lo: Link UP Dec 12 18:45:19.852874 systemd-networkd[821]: lo: Gained carrier Dec 12 18:45:19.854482 systemd-networkd[821]: Enumeration completed Dec 12 18:45:19.854576 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 18:45:19.855375 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:45:19.855381 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 18:45:19.856374 systemd[1]: Reached target network.target - Network. Dec 12 18:45:19.857590 systemd-networkd[821]: eth0: Link UP Dec 12 18:45:19.857801 systemd-networkd[821]: eth0: Gained carrier Dec 12 18:45:19.857811 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 18:45:19.860211 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 12 18:45:19.893372 ignition[825]: Ignition 2.22.0 Dec 12 18:45:19.893389 ignition[825]: Stage: fetch Dec 12 18:45:19.893549 ignition[825]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:19.893563 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:19.893655 ignition[825]: parsed url from cmdline: "" Dec 12 18:45:19.893660 ignition[825]: no config URL provided Dec 12 18:45:19.893666 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 18:45:19.893676 ignition[825]: no config at "/usr/lib/ignition/user.ign" Dec 12 18:45:19.893714 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1 Dec 12 18:45:19.893928 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:45:20.094196 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2 Dec 12 18:45:20.094373 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:45:20.495047 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3 Dec 12 18:45:20.495299 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 12 18:45:20.671211 systemd-networkd[821]: eth0: DHCPv4 address 172.237.139.125/24, gateway 172.237.139.1 acquired from 23.192.120.14 Dec 12 18:45:21.296409 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4 Dec 12 18:45:21.391916 ignition[825]: PUT result: OK Dec 12 18:45:21.392209 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1 Dec 12 18:45:21.497522 ignition[825]: GET result: OK Dec 12 18:45:21.497685 ignition[825]: parsing config with SHA512: e0dfdd88d431f1aef3c3d1b4a2e990d8f24f9715bdf6a1118d557a079c75740e49218f38f7d7f0b4b236d5bce3d4129e0ca4b9ed3c34e623f742ddc675660272 Dec 12 18:45:21.502671 unknown[825]: fetched base config from "system" Dec 12 18:45:21.502685 
unknown[825]: fetched base config from "system" Dec 12 18:45:21.503220 ignition[825]: fetch: fetch complete Dec 12 18:45:21.502692 unknown[825]: fetched user config from "akamai" Dec 12 18:45:21.503226 ignition[825]: fetch: fetch passed Dec 12 18:45:21.506271 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 18:45:21.503270 ignition[825]: Ignition finished successfully Dec 12 18:45:21.515249 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 18:45:21.549846 ignition[833]: Ignition 2.22.0 Dec 12 18:45:21.550753 ignition[833]: Stage: kargs Dec 12 18:45:21.550900 ignition[833]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:21.550913 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:21.551912 ignition[833]: kargs: kargs passed Dec 12 18:45:21.551975 ignition[833]: Ignition finished successfully Dec 12 18:45:21.557032 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 18:45:21.559063 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 18:45:21.591194 ignition[839]: Ignition 2.22.0 Dec 12 18:45:21.591210 ignition[839]: Stage: disks Dec 12 18:45:21.591345 ignition[839]: no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:21.591359 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:21.592329 ignition[839]: disks: disks passed Dec 12 18:45:21.594996 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 18:45:21.592378 ignition[839]: Ignition finished successfully Dec 12 18:45:21.597061 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 18:45:21.598620 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 18:45:21.600310 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 18:45:21.601965 systemd[1]: Reached target sysinit.target - System Initialization. 
Dec 12 18:45:21.603462 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:45:21.606247 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 18:45:21.649457 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 18:45:21.653997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 18:45:21.657540 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 18:45:21.771159 kernel: EXT4-fs (sda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 12 18:45:21.771577 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 18:45:21.772832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 18:45:21.775213 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:45:21.778187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 18:45:21.779888 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 18:45:21.779941 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 18:45:21.779966 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 18:45:21.792484 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 18:45:21.794657 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 12 18:45:21.806450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855) Dec 12 18:45:21.806491 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:45:21.808375 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:45:21.817802 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:45:21.817847 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:45:21.820197 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:45:21.825586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 18:45:21.852400 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 18:45:21.858457 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory Dec 12 18:45:21.863274 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 18:45:21.868518 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 18:45:21.915300 systemd-networkd[821]: eth0: Gained IPv6LL Dec 12 18:45:21.979660 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 18:45:21.982342 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 18:45:21.985269 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 18:45:22.001433 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 18:45:22.005139 kernel: BTRFS info (device sda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:45:22.023040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 12 18:45:22.035689 ignition[969]: INFO : Ignition 2.22.0 Dec 12 18:45:22.035689 ignition[969]: INFO : Stage: mount Dec 12 18:45:22.039271 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:22.039271 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:22.039271 ignition[969]: INFO : mount: mount passed Dec 12 18:45:22.039271 ignition[969]: INFO : Ignition finished successfully Dec 12 18:45:22.041612 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 18:45:22.044619 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 18:45:22.773226 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 18:45:22.797158 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980) Dec 12 18:45:22.797249 kernel: BTRFS info (device sda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 12 18:45:22.803566 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 12 18:45:22.813757 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 12 18:45:22.813802 kernel: BTRFS info (device sda6): turning on async discard Dec 12 18:45:22.817878 kernel: BTRFS info (device sda6): enabling free space tree Dec 12 18:45:22.821760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 18:45:22.857327 ignition[997]: INFO : Ignition 2.22.0 Dec 12 18:45:22.857327 ignition[997]: INFO : Stage: files Dec 12 18:45:22.859424 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 18:45:22.859424 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Dec 12 18:45:22.859424 ignition[997]: DEBUG : files: compiled without relabeling support, skipping Dec 12 18:45:22.862953 ignition[997]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 18:45:22.862953 ignition[997]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 18:45:22.862953 ignition[997]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 18:45:22.866644 ignition[997]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 18:45:22.866644 ignition[997]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 18:45:22.865679 unknown[997]: wrote ssh authorized keys file for user: core Dec 12 18:45:22.869929 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:45:22.869929 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 12 18:45:22.979996 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 18:45:23.115076 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 12 18:45:23.115076 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 12 18:45:23.118253 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 12 18:45:23.334636 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 12 18:45:23.409856 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 12 18:45:23.409856 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:45:23.412331 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:45:23.427388 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 12 18:45:23.971654 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 12 18:45:24.204162 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 12 18:45:24.204162 ignition[997]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 12 18:45:24.208238 ignition[997]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:45:24.210505 ignition[997]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:45:24.210505 ignition[997]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 12 18:45:24.210505 ignition[997]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:45:24.216000 ignition[997]: INFO : files: files passed
Dec 12 18:45:24.216000 ignition[997]: INFO : Ignition finished successfully
Dec 12 18:45:24.214814 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:45:24.219245 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:45:24.223452 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:45:24.241515 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:45:24.241649 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:45:24.249836 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:24.249836 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:24.252721 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:45:24.254572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:45:24.255800 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:45:24.258320 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:45:24.324325 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:45:24.324469 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:45:24.326642 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:45:24.328044 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:45:24.329894 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:45:24.330849 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:45:24.356179 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:45:24.358874 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:45:24.383239 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:45:24.384221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:45:24.385907 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:45:24.387516 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:45:24.387662 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:45:24.389516 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:45:24.390605 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:45:24.392186 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:45:24.393681 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:45:24.395191 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:45:24.396835 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:45:24.398591 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:45:24.400234 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:45:24.401895 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:45:24.403533 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:45:24.405326 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:45:24.406861 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:45:24.407013 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:45:24.409221 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:24.410288 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:24.411797 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:45:24.413469 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:24.414331 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:45:24.414433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:45:24.416485 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:45:24.416651 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:45:24.417668 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:45:24.417842 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:45:24.421199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:45:24.422569 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:45:24.422695 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:45:24.428902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:45:24.430977 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:45:24.431177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:45:24.432677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:45:24.432775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:45:24.452089 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:45:24.452253 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:45:24.464788 ignition[1051]: INFO : Ignition 2.22.0
Dec 12 18:45:24.466593 ignition[1051]: INFO : Stage: umount
Dec 12 18:45:24.466593 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:45:24.466593 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Dec 12 18:45:24.466593 ignition[1051]: INFO : umount: umount passed
Dec 12 18:45:24.466593 ignition[1051]: INFO : Ignition finished successfully
Dec 12 18:45:24.471445 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:45:24.471578 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:45:24.475496 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:45:24.476353 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:45:24.476443 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:45:24.479370 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:45:24.479424 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:45:24.480921 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 12 18:45:24.481189 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 12 18:45:24.484249 systemd[1]: Stopped target network.target - Network.
Dec 12 18:45:24.485689 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:45:24.485747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:45:24.487266 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:45:24.488668 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:45:24.492152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:24.493502 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:45:24.495348 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:45:24.497124 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:45:24.497173 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:45:24.499051 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:45:24.499094 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:45:24.501053 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:45:24.501139 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:45:24.502562 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:45:24.502612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:45:24.504300 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:45:24.505666 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:45:24.511812 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:45:24.511925 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:45:24.515253 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:45:24.515353 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:45:24.517643 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:45:24.517820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:45:24.522715 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:45:24.522935 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:45:24.523300 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:45:24.526470 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:45:24.527576 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:45:24.528880 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:45:24.528925 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:45:24.531595 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:45:24.533337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:45:24.533392 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:45:24.537305 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:45:24.537393 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:45:24.539245 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:45:24.539301 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:45:24.541022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:45:24.541085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:45:24.545425 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:45:24.551367 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:45:24.551437 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:45:24.565933 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:45:24.566068 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:45:24.569144 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:45:24.569347 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:45:24.571599 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:45:24.572092 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:45:24.573456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:45:24.573503 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:45:24.575027 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:45:24.575089 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:45:24.577321 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:45:24.577375 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:45:24.578870 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:45:24.578920 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:45:24.581314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:45:24.583587 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:45:24.583650 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:45:24.587226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:45:24.587281 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:45:24.590358 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:45:24.590427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:24.593372 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:45:24.593431 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:45:24.593481 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:45:24.601100 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:45:24.602184 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:45:24.603761 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:45:24.612625 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:45:24.632964 systemd[1]: Switching root.
Dec 12 18:45:24.660167 systemd-journald[187]: Journal stopped
Dec 12 18:45:25.985337 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:45:25.985369 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:45:25.985381 kernel: SELinux: policy capability open_perms=1
Dec 12 18:45:25.985393 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:45:25.985408 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:45:25.985426 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:45:25.985438 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:45:25.985449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:45:25.985458 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:45:25.985468 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:45:25.985478 kernel: audit: type=1403 audit(1765565124.851:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:45:25.985488 systemd[1]: Successfully loaded SELinux policy in 76.444ms.
Dec 12 18:45:25.985502 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.430ms.
Dec 12 18:45:25.985513 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:45:25.985524 systemd[1]: Detected virtualization kvm.
Dec 12 18:45:25.985535 systemd[1]: Detected architecture x86-64.
Dec 12 18:45:25.985547 systemd[1]: Detected first boot.
Dec 12 18:45:25.985557 systemd[1]: Initializing machine ID from random generator.
Dec 12 18:45:25.985568 zram_generator::config[1099]: No configuration found.
Dec 12 18:45:25.985579 kernel: Guest personality initialized and is inactive
Dec 12 18:45:25.985589 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:45:25.985599 kernel: Initialized host personality
Dec 12 18:45:25.985608 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:45:25.985618 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:45:25.985631 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:45:25.985642 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:45:25.985652 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:45:25.985662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:45:25.985673 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:45:25.985683 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:45:25.985694 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:45:25.985707 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:45:25.985718 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:45:25.985728 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:45:25.985739 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:45:25.985749 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:45:25.985761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:45:25.985771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:45:25.985782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:45:25.985794 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:45:25.985807 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:45:25.985818 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:45:25.985829 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:45:25.985840 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:45:25.985850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:45:25.985861 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:45:25.985873 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:45:25.985884 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:45:25.985894 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:45:25.985905 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:45:25.985915 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:45:25.985926 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:45:25.985936 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:45:25.985946 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:45:25.985957 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:45:25.985969 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:45:25.985980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:45:25.985991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:45:25.986002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:45:25.986015 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:45:25.986025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:45:25.986036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:45:25.986046 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:45:25.986057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:25.986067 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:45:25.986077 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:45:25.986088 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:45:25.986101 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:45:25.986128 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:45:25.986139 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:45:25.986149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:25.986160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:45:25.986171 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:45:25.986181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:45:25.986192 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:45:25.986202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:45:25.986216 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:45:25.986226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:45:25.986238 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:45:25.986248 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:45:25.986259 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:45:25.986269 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:45:25.986280 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:45:25.986291 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:25.986303 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:45:25.986314 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:45:25.986325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:45:25.986335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:45:25.986346 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:45:25.986356 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:45:25.986367 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:45:25.986377 systemd[1]: Stopped verity-setup.service.
Dec 12 18:45:25.986390 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:25.986401 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:45:25.986411 kernel: loop: module loaded
Dec 12 18:45:25.986421 kernel: fuse: init (API version 7.41)
Dec 12 18:45:25.986431 kernel: ACPI: bus type drm_connector registered
Dec 12 18:45:25.986441 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:45:25.986452 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:45:25.986463 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:45:25.986474 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:45:25.986511 systemd-journald[1179]: Collecting audit messages is disabled.
Dec 12 18:45:25.986531 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:45:25.986542 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:45:25.986553 systemd-journald[1179]: Journal started
Dec 12 18:45:25.986575 systemd-journald[1179]: Runtime Journal (/run/log/journal/21c58d7b16e74583b4aff78f9d967668) is 8M, max 78.2M, 70.2M free.
Dec 12 18:45:25.556210 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:45:25.582359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 12 18:45:25.583068 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:45:25.993334 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:45:25.996721 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:45:25.998262 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:45:25.998643 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:45:26.000033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:45:26.000569 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:45:26.001711 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:45:26.002041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:45:26.003713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:45:26.004425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:45:26.005620 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:45:26.005832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:45:26.007078 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:45:26.007387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:45:26.008817 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:45:26.010042 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:45:26.011372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:45:26.012814 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:45:26.032590 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:45:26.035202 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:45:26.039488 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:45:26.042213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:45:26.042243 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:45:26.043946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:45:26.055306 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:45:26.058484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:26.062395 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:45:26.065221 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:45:26.066201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:45:26.070786 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:45:26.072333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:45:26.074436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:45:26.080344 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:45:26.086220 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:45:26.090727 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:45:26.092596 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:45:26.099866 systemd-journald[1179]: Time spent on flushing to /var/log/journal/21c58d7b16e74583b4aff78f9d967668 is 41.098ms for 1011 entries.
Dec 12 18:45:26.099866 systemd-journald[1179]: System Journal (/var/log/journal/21c58d7b16e74583b4aff78f9d967668) is 8M, max 195.6M, 187.6M free.
Dec 12 18:45:26.158441 systemd-journald[1179]: Received client request to flush runtime journal.
Dec 12 18:45:26.158505 kernel: loop0: detected capacity change from 0 to 224512
Dec 12 18:45:26.152067 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:45:26.153022 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:45:26.157525 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:45:26.172617 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:45:26.180813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:45:26.184070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:45:26.202160 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:45:26.218470 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:45:26.221957 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:45:26.227228 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:45:26.233605 kernel: loop1: detected capacity change from 0 to 128560
Dec 12 18:45:26.271420 kernel: loop2: detected capacity change from 0 to 110984
Dec 12 18:45:26.300349 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 12 18:45:26.300368 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Dec 12 18:45:26.307860 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:45:26.322347 kernel: loop3: detected capacity change from 0 to 8
Dec 12 18:45:26.349133 kernel: loop4: detected capacity change from 0 to 224512
Dec 12 18:45:26.373512 kernel: loop5: detected capacity change from 0 to 128560
Dec 12 18:45:26.399140 kernel: loop6: detected capacity change from 0 to 110984
Dec 12 18:45:26.414159 kernel: loop7: detected capacity change from 0 to 8
Dec 12 18:45:26.416046 (sd-merge)[1244]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Dec 12 18:45:26.417719 (sd-merge)[1244]: Merged extensions into '/usr'.
Dec 12 18:45:26.428406 systemd[1]: Reload requested from client PID 1220 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:45:26.428581 systemd[1]: Reloading...
Dec 12 18:45:26.540137 zram_generator::config[1270]: No configuration found.
Dec 12 18:45:26.679148 ldconfig[1215]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:45:26.743879 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:45:26.744954 systemd[1]: Reloading finished in 315 ms.
Dec 12 18:45:26.762077 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:45:26.763419 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:45:26.764700 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:45:26.783980 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:45:26.788230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:45:26.803245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:45:26.816797 systemd[1]: Reload requested from client PID 1314 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:45:26.816808 systemd[1]: Reloading...
Dec 12 18:45:26.830787 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:45:26.831011 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:45:26.831343 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:45:26.831622 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:45:26.832605 systemd-tmpfiles[1315]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:45:26.832866 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Dec 12 18:45:26.832943 systemd-tmpfiles[1315]: ACLs are not supported, ignoring.
Dec 12 18:45:26.846416 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:45:26.846434 systemd-tmpfiles[1315]: Skipping /boot
Dec 12 18:45:26.847576 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Dec 12 18:45:26.869590 systemd-tmpfiles[1315]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:45:26.869610 systemd-tmpfiles[1315]: Skipping /boot
Dec 12 18:45:26.955201 zram_generator::config[1351]: No configuration found.
Dec 12 18:45:27.198144 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:45:27.212147 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:45:27.246317 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:45:27.247355 systemd[1]: Reloading finished in 430 ms.
Dec 12 18:45:27.257944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:45:27.259367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:45:27.274161 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:45:27.298461 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:45:27.298815 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:45:27.312908 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.316189 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:45:27.319449 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:45:27.322296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:27.325961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:45:27.333183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:45:27.339748 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:45:27.340849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:27.341053 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:27.343462 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:45:27.349217 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:45:27.356439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:45:27.362627 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:45:27.364305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.369497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:45:27.370656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:45:27.377867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.379069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:27.391415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:45:27.393278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:27.393557 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:27.393943 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.395498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:45:27.397402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:45:27.413221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.413452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:45:27.422343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:45:27.428456 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:45:27.429606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:45:27.430233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:45:27.432865 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:45:27.434051 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:45:27.435075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:45:27.436417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:45:27.440355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:45:27.443311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:45:27.463784 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:45:27.466030 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:45:27.470814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:45:27.476029 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:45:27.488502 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:45:27.495295 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:45:27.503404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 12 18:45:27.512600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:45:27.515558 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:45:27.517161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:45:27.518565 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:45:27.518847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:45:27.522714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:45:27.525600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:45:27.530152 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:45:27.530389 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:45:27.533099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:45:27.541728 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:45:27.562490 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:45:27.565566 augenrules[1489]: No rules
Dec 12 18:45:27.568041 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:45:27.568682 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:45:27.635832 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:45:27.760942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:45:27.805611 systemd-networkd[1443]: lo: Link UP
Dec 12 18:45:27.805630 systemd-networkd[1443]: lo: Gained carrier
Dec 12 18:45:27.807592 systemd-networkd[1443]: Enumeration completed
Dec 12 18:45:27.807695 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:45:27.810642 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:27.810654 systemd-networkd[1443]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:45:27.812039 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:45:27.812636 systemd-networkd[1443]: eth0: Link UP
Dec 12 18:45:27.812827 systemd-networkd[1443]: eth0: Gained carrier
Dec 12 18:45:27.812864 systemd-networkd[1443]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:45:27.816825 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:45:27.851084 systemd-resolved[1444]: Positive Trust Anchors:
Dec 12 18:45:27.852144 systemd-resolved[1444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:45:27.852183 systemd-resolved[1444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:45:27.858566 systemd-resolved[1444]: Defaulting to hostname 'linux'.
Dec 12 18:45:27.860167 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:45:27.861878 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:45:27.862826 systemd[1]: Reached target network.target - Network.
Dec 12 18:45:27.863575 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:45:27.867501 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:45:27.868557 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:45:27.869592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:45:27.870401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 12 18:45:27.871171 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Dec 12 18:45:27.871912 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 12 18:45:27.872681 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 12 18:45:27.872719 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:45:27.880002 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:45:27.880906 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 12 18:45:27.881898 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 12 18:45:27.882660 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:45:27.884635 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 12 18:45:27.887508 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 12 18:45:27.890288 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 12 18:45:27.891210 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 12 18:45:27.891963 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 12 18:45:27.895321 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 12 18:45:27.896649 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 12 18:45:27.898041 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 12 18:45:27.899737 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:45:27.900589 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:45:27.901510 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:45:27.901547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 12 18:45:27.902762 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 12 18:45:27.906416 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 12 18:45:27.908357 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 12 18:45:27.910407 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 12 18:45:27.915042 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 12 18:45:27.918741 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 12 18:45:27.921216 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 12 18:45:27.922747 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Dec 12 18:45:27.926331 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 12 18:45:27.931225 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 12 18:45:27.947468 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 12 18:45:27.957522 jq[1516]: false
Dec 12 18:45:27.953795 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 12 18:45:27.971407 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 12 18:45:27.975419 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 12 18:45:27.973502 oslogin_cache_refresh[1518]: Refreshing passwd entry cache
Dec 12 18:45:27.977018 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing passwd entry cache
Dec 12 18:45:27.976043 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 12 18:45:27.977538 systemd[1]: Starting update-engine.service - Update Engine...
Dec 12 18:45:27.981859 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting users, quitting
Dec 12 18:45:27.981859 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:45:27.981859 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Refreshing group entry cache
Dec 12 18:45:27.981859 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Failure getting groups, quitting
Dec 12 18:45:27.981859 google_oslogin_nss_cache[1518]: oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:45:27.981286 oslogin_cache_refresh[1518]: Failure getting users, quitting
Dec 12 18:45:27.981304 oslogin_cache_refresh[1518]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Dec 12 18:45:27.981348 oslogin_cache_refresh[1518]: Refreshing group entry cache
Dec 12 18:45:27.981791 oslogin_cache_refresh[1518]: Failure getting groups, quitting
Dec 12 18:45:27.981802 oslogin_cache_refresh[1518]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Dec 12 18:45:27.989398 extend-filesystems[1517]: Found /dev/sda6
Dec 12 18:45:27.989494 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 12 18:45:27.993982 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 12 18:45:27.995661 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 12 18:45:27.996320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 12 18:45:27.996666 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Dec 12 18:45:28.000595 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Dec 12 18:45:28.005978 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 12 18:45:28.006272 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 12 18:45:28.027244 extend-filesystems[1517]: Found /dev/sda9
Dec 12 18:45:28.035875 jq[1531]: true
Dec 12 18:45:28.028887 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 12 18:45:28.040438 extend-filesystems[1517]: Checking size of /dev/sda9
Dec 12 18:45:28.075119 tar[1535]: linux-amd64/LICENSE
Dec 12 18:45:28.076645 tar[1535]: linux-amd64/helm
Dec 12 18:45:28.085786 jq[1550]: true
Dec 12 18:45:28.106841 extend-filesystems[1517]: Resized partition /dev/sda9
Dec 12 18:45:28.107924 update_engine[1529]: I20251212 18:45:28.107460 1529 main.cc:92] Flatcar Update Engine starting
Dec 12 18:45:28.113473 systemd[1]: motdgen.service: Deactivated successfully.
Dec 12 18:45:28.114473 dbus-daemon[1514]: [system] SELinux support is enabled
Dec 12 18:45:28.113772 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 12 18:45:28.114725 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 12 18:45:28.118179 coreos-metadata[1513]: Dec 12 18:45:28.117 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:45:28.119027 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 12 18:45:28.119059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 12 18:45:28.121375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 12 18:45:28.121396 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 12 18:45:28.124093 update_engine[1529]: I20251212 18:45:28.123940 1529 update_check_scheduler.cc:74] Next update check in 11m22s
Dec 12 18:45:28.125751 systemd[1]: Started update-engine.service - Update Engine.
Dec 12 18:45:28.133012 extend-filesystems[1563]: resize2fs 1.47.3 (8-Jul-2025)
Dec 12 18:45:28.136605 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 12 18:45:28.148137 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Dec 12 18:45:28.181408 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 12 18:45:28.181446 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 12 18:45:28.182462 systemd-logind[1528]: New seat seat0.
Dec 12 18:45:28.189021 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 12 18:45:28.280213 bash[1580]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:45:28.284483 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 12 18:45:28.300858 systemd[1]: Starting sshkeys.service...
Dec 12 18:45:28.333826 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 12 18:45:28.337346 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 12 18:45:28.428801 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 12 18:45:28.464026 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 12 18:45:28.470025 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 12 18:45:28.492974 coreos-metadata[1589]: Dec 12 18:45:28.492 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Dec 12 18:45:28.494904 systemd[1]: issuegen.service: Deactivated successfully.
Dec 12 18:45:28.495411 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 12 18:45:28.500339 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 12 18:45:28.501294 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Dec 12 18:45:28.509995 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 12 18:45:28.514668 containerd[1536]: time="2025-12-12T18:45:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 12 18:45:28.516266 extend-filesystems[1563]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 12 18:45:28.516266 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 10
Dec 12 18:45:28.516266 extend-filesystems[1563]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Dec 12 18:45:28.528882 extend-filesystems[1517]: Resized filesystem in /dev/sda9
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.516396840Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534476620Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.17µs"
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534498060Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534515100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534669550Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534683810Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534706190Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534767240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534777890Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.534998450Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.535012090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:45:28.535590 containerd[1536]: time="2025-12-12T18:45:28.535021750Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 12 18:45:28.518382 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 12 18:45:28.535948 containerd[1536]: time="2025-12-12T18:45:28.535029090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 12 18:45:28.519381 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 12 18:45:28.527448 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 12 18:45:28.534072 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.536149020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.536396270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.536605960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.536616260Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.537196250Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.537490600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 12 18:45:28.540607 containerd[1536]: time="2025-12-12T18:45:28.537768370Z" level=info msg="metadata content store policy set" policy=shared
Dec 12 18:45:28.538342 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 12 18:45:28.540366 systemd[1]: Reached target getty.target - Login Prompts.
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.540991230Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541030620Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541043720Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541054570Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541065910Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541074930Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541086830Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 12 18:45:28.541127 containerd[1536]: time="2025-12-12T18:45:28.541096920Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541510110Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541527410Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541536000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541546740Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541648230Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541665780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541678340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541688280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541703990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541713520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541722950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541732370Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541741470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541750640Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 12 18:45:28.541838 containerd[1536]: time="2025-12-12T18:45:28.541759220Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 12 18:45:28.542101 containerd[1536]: time="2025-12-12T18:45:28.541799180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 12 18:45:28.542101 containerd[1536]: time="2025-12-12T18:45:28.541810910Z" level=info msg="Start snapshots syncer"
Dec 12 18:45:28.542582 containerd[1536]: time="2025-12-12T18:45:28.542159250Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 12 18:45:28.542582 containerd[1536]: time="2025-12-12T18:45:28.542448740Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 12 18:45:28.542693 containerd[1536]: time="2025-12-12T18:45:28.542493670Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 12 18:45:28.542758 containerd[1536]: time="2025-12-12T18:45:28.542741820Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 12 18:45:28.542949 containerd[1536]: time="2025-12-12T18:45:28.542925430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543001540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543016500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543025770Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543036820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543046010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543055700Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543079100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543089140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 12 18:45:28.543339 containerd[1536]: time="2025-12-12T18:45:28.543098260Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 12 18:45:28.543531 containerd[1536]: time="2025-12-12T18:45:28.543516190Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:45:28.543634 containerd[1536]: time="2025-12-12T18:45:28.543619030Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 12 18:45:28.543688 containerd[1536]: time="2025-12-12T18:45:28.543675690Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:45:28.543732 containerd[1536]: time="2025-12-12T18:45:28.543719950Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 12 18:45:28.543769 containerd[1536]: time="2025-12-12T18:45:28.543759020Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543798360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543816980Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 12
18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543832730Z" level=info msg="runtime interface created" Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543837920Z" level=info msg="created NRI interface" Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543844780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543854500Z" level=info msg="Connect containerd service" Dec 12 18:45:28.543897 containerd[1536]: time="2025-12-12T18:45:28.543869910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:45:28.545128 containerd[1536]: time="2025-12-12T18:45:28.544982410Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:45:28.577564 systemd-networkd[1443]: eth0: DHCPv4 address 172.237.139.125/24, gateway 172.237.139.1 acquired from 23.192.120.14 Dec 12 18:45:28.578810 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1443 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 18:45:28.585936 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Dec 12 18:45:28.587457 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Dec 12 18:45:28.684988 containerd[1536]: time="2025-12-12T18:45:28.684605200Z" level=info msg="Start subscribing containerd event"
Dec 12 18:45:28.684988 containerd[1536]: time="2025-12-12T18:45:28.684710150Z" level=info msg="Start recovering state"
Dec 12 18:45:28.685523 containerd[1536]: time="2025-12-12T18:45:28.685352950Z" level=info msg="Start event monitor"
Dec 12 18:45:28.685608 containerd[1536]: time="2025-12-12T18:45:28.685593620Z" level=info msg="Start cni network conf syncer for default"
Dec 12 18:45:28.685755 containerd[1536]: time="2025-12-12T18:45:28.685721780Z" level=info msg="Start streaming server"
Dec 12 18:45:28.686011 containerd[1536]: time="2025-12-12T18:45:28.685800640Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 12 18:45:28.686011 containerd[1536]: time="2025-12-12T18:45:28.685879100Z" level=info msg="runtime interface starting up..."
Dec 12 18:45:28.686011 containerd[1536]: time="2025-12-12T18:45:28.685887120Z" level=info msg="starting plugins..."
Dec 12 18:45:28.686011 containerd[1536]: time="2025-12-12T18:45:28.685905040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 12 18:45:28.696481 containerd[1536]: time="2025-12-12T18:45:28.692459190Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 12 18:45:28.696481 containerd[1536]: time="2025-12-12T18:45:28.692548550Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 12 18:45:28.696481 containerd[1536]: time="2025-12-12T18:45:28.696018860Z" level=info msg="containerd successfully booted in 0.183234s"
Dec 12 18:45:28.692781 systemd[1]: Started containerd.service - containerd container runtime.
Dec 12 18:45:28.702726 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 12 18:45:28.705039 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 12 18:45:28.706591 dbus-daemon[1514]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1622 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 12 18:45:28.714514 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 12 18:45:28.795565 polkitd[1632]: Started polkitd version 126
Dec 12 18:45:28.799978 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d
Dec 12 18:45:28.800270 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d
Dec 12 18:45:28.800320 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:45:28.800529 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Dec 12 18:45:28.800562 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Dec 12 18:45:28.800598 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 12 18:45:28.801075 polkitd[1632]: Finished loading, compiling and executing 2 rules
Dec 12 18:45:28.801346 systemd[1]: Started polkit.service - Authorization Manager.
Dec 12 18:45:28.803385 dbus-daemon[1514]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 12 18:45:28.803802 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 12 18:45:28.815505 systemd-resolved[1444]: System hostname changed to '172-237-139-125'.
Dec 12 18:45:28.815569 systemd-hostnamed[1622]: Hostname set to <172-237-139-125> (transient)
Dec 12 18:45:28.826664 tar[1535]: linux-amd64/README.md
Dec 12 18:45:28.844547 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 12 18:45:28.864898 systemd-timesyncd[1471]: Contacted time server 23.186.168.123:123 (0.flatcar.pool.ntp.org).
Dec 12 18:45:28.864957 systemd-timesyncd[1471]: Initial clock synchronization to Fri 2025-12-12 18:45:28.953506 UTC.
Dec 12 18:45:29.129774 coreos-metadata[1513]: Dec 12 18:45:29.129 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:45:29.226103 coreos-metadata[1513]: Dec 12 18:45:29.226 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Dec 12 18:45:29.403412 systemd-networkd[1443]: eth0: Gained IPv6LL
Dec 12 18:45:29.406629 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 12 18:45:29.408244 systemd[1]: Reached target network-online.target - Network is Online.
Dec 12 18:45:29.410018 coreos-metadata[1513]: Dec 12 18:45:29.409 INFO Fetch successful
Dec 12 18:45:29.410018 coreos-metadata[1513]: Dec 12 18:45:29.409 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Dec 12 18:45:29.411641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:29.415511 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 12 18:45:29.436469 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 12 18:45:29.503853 coreos-metadata[1589]: Dec 12 18:45:29.503 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Dec 12 18:45:29.597328 coreos-metadata[1589]: Dec 12 18:45:29.596 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Dec 12 18:45:29.671980 coreos-metadata[1513]: Dec 12 18:45:29.669 INFO Fetch successful
Dec 12 18:45:29.735241 coreos-metadata[1589]: Dec 12 18:45:29.735 INFO Fetch successful
Dec 12 18:45:29.764041 update-ssh-keys[1669]: Updated "/home/core/.ssh/authorized_keys"
Dec 12 18:45:29.764727 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 12 18:45:29.769014 systemd[1]: Finished sshkeys.service.
Dec 12 18:45:29.810845 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 12 18:45:29.813075 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 12 18:45:30.383649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:30.385035 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 12 18:45:30.386817 systemd[1]: Startup finished in 2.886s (kernel) + 8.181s (initrd) + 5.611s (userspace) = 16.678s.
Dec 12 18:45:30.395736 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:45:30.918676 kubelet[1688]: E1212 18:45:30.918625 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:45:30.921925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:45:30.922143 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:45:30.922537 systemd[1]: kubelet.service: Consumed 908ms CPU time, 266.1M memory peak.
Dec 12 18:45:31.020436 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 12 18:45:31.021971 systemd[1]: Started sshd@0-172.237.139.125:22-139.178.68.195:34010.service - OpenSSH per-connection server daemon (139.178.68.195:34010).
Dec 12 18:45:31.378241 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 34010 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:31.380352 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:31.388102 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 12 18:45:31.389711 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 12 18:45:31.399091 systemd-logind[1528]: New session 1 of user core.
Dec 12 18:45:31.416792 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 12 18:45:31.420332 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 12 18:45:31.439794 (systemd)[1705]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 12 18:45:31.442809 systemd-logind[1528]: New session c1 of user core.
Dec 12 18:45:31.574700 systemd[1705]: Queued start job for default target default.target.
Dec 12 18:45:31.586717 systemd[1705]: Created slice app.slice - User Application Slice.
Dec 12 18:45:31.586742 systemd[1705]: Reached target paths.target - Paths.
Dec 12 18:45:31.586785 systemd[1705]: Reached target timers.target - Timers.
Dec 12 18:45:31.588412 systemd[1705]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 12 18:45:31.601086 systemd[1705]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 12 18:45:31.601239 systemd[1705]: Reached target sockets.target - Sockets.
Dec 12 18:45:31.601280 systemd[1705]: Reached target basic.target - Basic System.
Dec 12 18:45:31.601324 systemd[1705]: Reached target default.target - Main User Target.
Dec 12 18:45:31.601356 systemd[1705]: Startup finished in 152ms.
Dec 12 18:45:31.601632 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 12 18:45:31.612227 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 12 18:45:31.870003 systemd[1]: Started sshd@1-172.237.139.125:22-139.178.68.195:34022.service - OpenSSH per-connection server daemon (139.178.68.195:34022).
Dec 12 18:45:32.220055 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 34022 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:32.222012 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:32.227484 systemd-logind[1528]: New session 2 of user core.
Dec 12 18:45:32.233242 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 12 18:45:32.468668 sshd[1719]: Connection closed by 139.178.68.195 port 34022
Dec 12 18:45:32.469306 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:32.474401 systemd[1]: sshd@1-172.237.139.125:22-139.178.68.195:34022.service: Deactivated successfully.
Dec 12 18:45:32.476894 systemd[1]: session-2.scope: Deactivated successfully.
Dec 12 18:45:32.477925 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Dec 12 18:45:32.479685 systemd-logind[1528]: Removed session 2.
Dec 12 18:45:32.535884 systemd[1]: Started sshd@2-172.237.139.125:22-139.178.68.195:34032.service - OpenSSH per-connection server daemon (139.178.68.195:34032).
Dec 12 18:45:32.899310 sshd[1725]: Accepted publickey for core from 139.178.68.195 port 34032 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:32.901331 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:32.909180 systemd-logind[1528]: New session 3 of user core.
Dec 12 18:45:32.918262 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 12 18:45:33.152302 sshd[1728]: Connection closed by 139.178.68.195 port 34032
Dec 12 18:45:33.153509 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:33.157728 systemd[1]: sshd@2-172.237.139.125:22-139.178.68.195:34032.service: Deactivated successfully.
Dec 12 18:45:33.159845 systemd[1]: session-3.scope: Deactivated successfully.
Dec 12 18:45:33.160810 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit.
Dec 12 18:45:33.162098 systemd-logind[1528]: Removed session 3.
Dec 12 18:45:33.224231 systemd[1]: Started sshd@3-172.237.139.125:22-139.178.68.195:34038.service - OpenSSH per-connection server daemon (139.178.68.195:34038).
Dec 12 18:45:33.567717 sshd[1734]: Accepted publickey for core from 139.178.68.195 port 34038 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:33.569248 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:33.574548 systemd-logind[1528]: New session 4 of user core.
Dec 12 18:45:33.581231 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 12 18:45:33.822843 sshd[1737]: Connection closed by 139.178.68.195 port 34038
Dec 12 18:45:33.823542 sshd-session[1734]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:33.827748 systemd[1]: sshd@3-172.237.139.125:22-139.178.68.195:34038.service: Deactivated successfully.
Dec 12 18:45:33.829502 systemd[1]: session-4.scope: Deactivated successfully.
Dec 12 18:45:33.830341 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit.
Dec 12 18:45:33.831519 systemd-logind[1528]: Removed session 4.
Dec 12 18:45:33.896448 systemd[1]: Started sshd@4-172.237.139.125:22-139.178.68.195:34044.service - OpenSSH per-connection server daemon (139.178.68.195:34044).
Dec 12 18:45:34.246494 sshd[1743]: Accepted publickey for core from 139.178.68.195 port 34044 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:34.248291 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:34.253375 systemd-logind[1528]: New session 5 of user core.
Dec 12 18:45:34.259231 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 12 18:45:34.457047 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 12 18:45:34.457396 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:34.473132 sudo[1747]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:34.525388 sshd[1746]: Connection closed by 139.178.68.195 port 34044
Dec 12 18:45:34.526174 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:34.532722 systemd[1]: sshd@4-172.237.139.125:22-139.178.68.195:34044.service: Deactivated successfully.
Dec 12 18:45:34.538587 systemd[1]: session-5.scope: Deactivated successfully.
Dec 12 18:45:34.540046 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Dec 12 18:45:34.542712 systemd-logind[1528]: Removed session 5.
Dec 12 18:45:34.591647 systemd[1]: Started sshd@5-172.237.139.125:22-139.178.68.195:34046.service - OpenSSH per-connection server daemon (139.178.68.195:34046).
Dec 12 18:45:34.954708 sshd[1753]: Accepted publickey for core from 139.178.68.195 port 34046 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:34.956751 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:34.963127 systemd-logind[1528]: New session 6 of user core.
Dec 12 18:45:34.968248 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 12 18:45:35.154780 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 12 18:45:35.155159 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:35.160220 sudo[1758]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:35.166826 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 12 18:45:35.167178 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:35.178101 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:45:35.222324 augenrules[1780]: No rules
Dec 12 18:45:35.224886 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:45:35.225245 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:45:35.226811 sudo[1757]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:35.278490 sshd[1756]: Connection closed by 139.178.68.195 port 34046
Dec 12 18:45:35.279155 sshd-session[1753]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:35.284977 systemd[1]: sshd@5-172.237.139.125:22-139.178.68.195:34046.service: Deactivated successfully.
Dec 12 18:45:35.287184 systemd[1]: session-6.scope: Deactivated successfully.
Dec 12 18:45:35.288398 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Dec 12 18:45:35.289671 systemd-logind[1528]: Removed session 6.
Dec 12 18:45:35.344409 systemd[1]: Started sshd@6-172.237.139.125:22-139.178.68.195:34054.service - OpenSSH per-connection server daemon (139.178.68.195:34054).
Dec 12 18:45:35.687799 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 34054 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:45:35.689879 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:45:35.695932 systemd-logind[1528]: New session 7 of user core.
Dec 12 18:45:35.701249 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 12 18:45:35.887456 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 12 18:45:35.887850 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 12 18:45:36.202295 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 12 18:45:36.224538 (dockerd)[1811]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 12 18:45:36.457520 dockerd[1811]: time="2025-12-12T18:45:36.457055260Z" level=info msg="Starting up"
Dec 12 18:45:36.458338 dockerd[1811]: time="2025-12-12T18:45:36.458302700Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 12 18:45:36.474726 dockerd[1811]: time="2025-12-12T18:45:36.474635057Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 12 18:45:36.516956 dockerd[1811]: time="2025-12-12T18:45:36.516769148Z" level=info msg="Loading containers: start."
Dec 12 18:45:36.528157 kernel: Initializing XFRM netlink socket
Dec 12 18:45:36.798704 systemd-networkd[1443]: docker0: Link UP
Dec 12 18:45:36.802558 dockerd[1811]: time="2025-12-12T18:45:36.802512604Z" level=info msg="Loading containers: done."
Dec 12 18:45:36.819918 dockerd[1811]: time="2025-12-12T18:45:36.819868059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 12 18:45:36.820089 dockerd[1811]: time="2025-12-12T18:45:36.819944460Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 12 18:45:36.820089 dockerd[1811]: time="2025-12-12T18:45:36.820044795Z" level=info msg="Initializing buildkit"
Dec 12 18:45:36.820651 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2374877170-merged.mount: Deactivated successfully.
Dec 12 18:45:36.842806 dockerd[1811]: time="2025-12-12T18:45:36.842773190Z" level=info msg="Completed buildkit initialization"
Dec 12 18:45:36.850411 dockerd[1811]: time="2025-12-12T18:45:36.850361016Z" level=info msg="Daemon has completed initialization"
Dec 12 18:45:36.850594 dockerd[1811]: time="2025-12-12T18:45:36.850525077Z" level=info msg="API listen on /run/docker.sock"
Dec 12 18:45:36.850628 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 12 18:45:37.428523 containerd[1536]: time="2025-12-12T18:45:37.428443523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 12 18:45:38.173038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936788765.mount: Deactivated successfully.
Dec 12 18:45:39.371139 containerd[1536]: time="2025-12-12T18:45:39.369757515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:39.371139 containerd[1536]: time="2025-12-12T18:45:39.370616825Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183"
Dec 12 18:45:39.371963 containerd[1536]: time="2025-12-12T18:45:39.371938771Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:39.375938 containerd[1536]: time="2025-12-12T18:45:39.375907097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:39.376849 containerd[1536]: time="2025-12-12T18:45:39.376825358Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.948326581s"
Dec 12 18:45:39.376932 containerd[1536]: time="2025-12-12T18:45:39.376918208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\""
Dec 12 18:45:39.379816 containerd[1536]: time="2025-12-12T18:45:39.379789001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 12 18:45:40.934850 containerd[1536]: time="2025-12-12T18:45:40.934786315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:40.937311 containerd[1536]: time="2025-12-12T18:45:40.936966517Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010"
Dec 12 18:45:40.937745 containerd[1536]: time="2025-12-12T18:45:40.937685037Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:40.939634 containerd[1536]: time="2025-12-12T18:45:40.939594660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:40.940815 containerd[1536]: time="2025-12-12T18:45:40.940637275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.560820336s"
Dec 12 18:45:40.940815 containerd[1536]: time="2025-12-12T18:45:40.940671733Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\""
Dec 12 18:45:40.942204 containerd[1536]: time="2025-12-12T18:45:40.941131343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 12 18:45:41.174667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:45:41.177916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 18:45:41.398897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 18:45:41.414487 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 18:45:41.454905 kubelet[2090]: E1212 18:45:41.454854 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 18:45:41.463845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 18:45:41.464047 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 18:45:41.465290 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.3M memory peak.
Dec 12 18:45:42.231778 containerd[1536]: time="2025-12-12T18:45:42.231718555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:42.232720 containerd[1536]: time="2025-12-12T18:45:42.232617248Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248"
Dec 12 18:45:42.233195 containerd[1536]: time="2025-12-12T18:45:42.233166971Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:42.235342 containerd[1536]: time="2025-12-12T18:45:42.235311274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:42.236573 containerd[1536]: time="2025-12-12T18:45:42.236182524Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.295022913s"
Dec 12 18:45:42.236573 containerd[1536]: time="2025-12-12T18:45:42.236213454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\""
Dec 12 18:45:42.237088 containerd[1536]: time="2025-12-12T18:45:42.237069765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 12 18:45:43.462844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568623692.mount: Deactivated successfully.
Dec 12 18:45:43.824603 containerd[1536]: time="2025-12-12T18:45:43.824461756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:43.825747 containerd[1536]: time="2025-12-12T18:45:43.825559040Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423"
Dec 12 18:45:43.826308 containerd[1536]: time="2025-12-12T18:45:43.826268261Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:43.827979 containerd[1536]: time="2025-12-12T18:45:43.827940747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:43.828545 containerd[1536]: time="2025-12-12T18:45:43.828508606Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.591351032s"
Dec 12 18:45:43.828626 containerd[1536]: time="2025-12-12T18:45:43.828610530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\""
Dec 12 18:45:43.829523 containerd[1536]: time="2025-12-12T18:45:43.829489010Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 12 18:45:44.480392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404323555.mount: Deactivated successfully.
Dec 12 18:45:45.160605 containerd[1536]: time="2025-12-12T18:45:45.160558162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:45.161845 containerd[1536]: time="2025-12-12T18:45:45.161682062Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Dec 12 18:45:45.162726 containerd[1536]: time="2025-12-12T18:45:45.162696379Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:45.165044 containerd[1536]: time="2025-12-12T18:45:45.165023062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:45:45.166042 containerd[1536]: time="2025-12-12T18:45:45.166004936Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.336398388s"
Dec 12 18:45:45.166087 containerd[1536]: time="2025-12-12T18:45:45.166044567Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Dec 12 18:45:45.166810 containerd[1536]: time="2025-12-12T18:45:45.166765651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 12 18:45:45.759356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664521300.mount: Deactivated successfully.
Dec 12 18:45:45.764980 containerd[1536]: time="2025-12-12T18:45:45.764912874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:45.765650 containerd[1536]: time="2025-12-12T18:45:45.765607603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Dec 12 18:45:45.766263 containerd[1536]: time="2025-12-12T18:45:45.766205525Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:45.767789 containerd[1536]: time="2025-12-12T18:45:45.767751166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 18:45:45.769141 containerd[1536]: time="2025-12-12T18:45:45.768640339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 601.846702ms" Dec 12 18:45:45.769141 containerd[1536]: time="2025-12-12T18:45:45.768672130Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:45:45.769733 containerd[1536]: time="2025-12-12T18:45:45.769702548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 18:45:46.456148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3891772134.mount: Deactivated successfully. Dec 12 18:45:48.143059 containerd[1536]: time="2025-12-12T18:45:48.143001190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.144410 containerd[1536]: time="2025-12-12T18:45:48.144223632Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Dec 12 18:45:48.144946 containerd[1536]: time="2025-12-12T18:45:48.144921123Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.147586 containerd[1536]: time="2025-12-12T18:45:48.147560817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:45:48.148752 containerd[1536]: time="2025-12-12T18:45:48.148721554Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"57680541\" in 2.378990192s" Dec 12 18:45:48.148799 containerd[1536]: time="2025-12-12T18:45:48.148754063Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 12 18:45:50.012049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:50.012665 systemd[1]: kubelet.service: Consumed 225ms CPU time, 110.3M memory peak. Dec 12 18:45:50.014905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:50.049908 systemd[1]: Reload requested from client PID 2247 ('systemctl') (unit session-7.scope)... Dec 12 18:45:50.049946 systemd[1]: Reloading... Dec 12 18:45:50.211139 zram_generator::config[2294]: No configuration found. Dec 12 18:45:50.424439 systemd[1]: Reloading finished in 373 ms. Dec 12 18:45:50.479644 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:45:50.479760 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:45:50.480100 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:50.480226 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.3M memory peak. Dec 12 18:45:50.482290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:50.657212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:50.669317 (kubelet)[2345]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:45:50.722885 kubelet[2345]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:45:50.722885 kubelet[2345]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Dec 12 18:45:50.722885 kubelet[2345]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:45:50.722885 kubelet[2345]: I1212 18:45:50.722818 2345 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:45:50.860432 kubelet[2345]: I1212 18:45:50.860374 2345 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:45:50.860432 kubelet[2345]: I1212 18:45:50.860403 2345 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:45:50.860696 kubelet[2345]: I1212 18:45:50.860667 2345 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:45:50.890137 kubelet[2345]: E1212 18:45:50.890004 2345 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.237.139.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.237.139.125:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:45:50.890952 kubelet[2345]: I1212 18:45:50.890910 2345 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:45:50.903410 kubelet[2345]: I1212 18:45:50.903386 2345 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:45:50.907672 kubelet[2345]: I1212 18:45:50.907644 2345 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 18:45:50.907917 kubelet[2345]: I1212 18:45:50.907876 2345 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:45:50.908086 kubelet[2345]: I1212 18:45:50.907910 2345 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-139-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:45:50.908784 kubelet[2345]: I1212 18:45:50.908748 2345 topology_manager.go:138] "Creating topology manager with none 
policy" Dec 12 18:45:50.908784 kubelet[2345]: I1212 18:45:50.908769 2345 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:45:50.908935 kubelet[2345]: I1212 18:45:50.908901 2345 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:45:50.913102 kubelet[2345]: I1212 18:45:50.912946 2345 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:45:50.913102 kubelet[2345]: I1212 18:45:50.913034 2345 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:45:50.913102 kubelet[2345]: I1212 18:45:50.913054 2345 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:45:50.913102 kubelet[2345]: I1212 18:45:50.913068 2345 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:45:50.916811 kubelet[2345]: W1212 18:45:50.916774 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.237.139.125:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-139-125&limit=500&resourceVersion=0": dial tcp 172.237.139.125:6443: connect: connection refused Dec 12 18:45:50.917059 kubelet[2345]: E1212 18:45:50.917015 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.237.139.125:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-237-139-125&limit=500&resourceVersion=0\": dial tcp 172.237.139.125:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:45:50.918137 kubelet[2345]: I1212 18:45:50.917216 2345 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:45:50.918137 kubelet[2345]: I1212 18:45:50.917577 2345 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:45:50.918137 kubelet[2345]: W1212 18:45:50.917645 2345 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 18:45:50.923355 kubelet[2345]: I1212 18:45:50.923324 2345 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:45:50.923509 kubelet[2345]: I1212 18:45:50.923498 2345 server.go:1287] "Started kubelet" Dec 12 18:45:50.926376 kubelet[2345]: W1212 18:45:50.926317 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.237.139.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.237.139.125:6443: connect: connection refused Dec 12 18:45:50.926417 kubelet[2345]: E1212 18:45:50.926376 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.237.139.125:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.237.139.125:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:45:50.926969 kubelet[2345]: I1212 18:45:50.926825 2345 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:45:50.929734 kubelet[2345]: I1212 18:45:50.929705 2345 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:45:50.933132 kubelet[2345]: I1212 18:45:50.932517 2345 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:45:50.933132 kubelet[2345]: I1212 18:45:50.932875 2345 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:45:50.933847 kubelet[2345]: I1212 18:45:50.933800 2345 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:45:50.935583 kubelet[2345]: E1212 18:45:50.934062 2345 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://172.237.139.125:6443/api/v1/namespaces/default/events\": dial tcp 172.237.139.125:6443: connect: connection refused" event="&Event{ObjectMeta:{172-237-139-125.18808c26809a0073 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-237-139-125,UID:172-237-139-125,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-237-139-125,},FirstTimestamp:2025-12-12 18:45:50.923456627 +0000 UTC m=+0.246920168,LastTimestamp:2025-12-12 18:45:50.923456627 +0000 UTC m=+0.246920168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-237-139-125,}" Dec 12 18:45:50.935961 kubelet[2345]: I1212 18:45:50.935929 2345 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:45:50.936163 kubelet[2345]: E1212 18:45:50.936102 2345 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-139-125\" not found" Dec 12 18:45:50.936786 kubelet[2345]: I1212 18:45:50.936568 2345 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:45:50.936825 kubelet[2345]: I1212 18:45:50.936815 2345 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:45:50.937662 kubelet[2345]: W1212 18:45:50.937521 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.237.139.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.237.139.125:6443: connect: connection refused Dec 12 18:45:50.937662 kubelet[2345]: E1212 18:45:50.937565 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.237.139.125:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
172.237.139.125:6443: connect: connection refused" logger="UnhandledError" Dec 12 18:45:50.937662 kubelet[2345]: E1212 18:45:50.937619 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.139.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-139-125?timeout=10s\": dial tcp 172.237.139.125:6443: connect: connection refused" interval="200ms" Dec 12 18:45:50.938386 kubelet[2345]: I1212 18:45:50.937779 2345 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:45:50.938386 kubelet[2345]: I1212 18:45:50.937856 2345 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:45:50.940667 kubelet[2345]: I1212 18:45:50.940639 2345 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:45:50.945445 kubelet[2345]: I1212 18:45:50.945423 2345 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:45:50.969200 kubelet[2345]: I1212 18:45:50.969150 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:45:50.972287 kubelet[2345]: I1212 18:45:50.972260 2345 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:45:50.972398 kubelet[2345]: I1212 18:45:50.972381 2345 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:45:50.977563 kubelet[2345]: I1212 18:45:50.975622 2345 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 18:45:50.977563 kubelet[2345]: I1212 18:45:50.975707 2345 kubelet.go:2382] "Starting kubelet main sync loop"
Dec 12 18:45:50.977563 kubelet[2345]: E1212 18:45:50.975790 2345 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 18:45:50.979737 kubelet[2345]: I1212 18:45:50.979696 2345 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 18:45:50.979737 kubelet[2345]: I1212 18:45:50.979727 2345 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 18:45:50.981152 kubelet[2345]: W1212 18:45:50.981033 2345 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.237.139.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.237.139.125:6443: connect: connection refused
Dec 12 18:45:50.981229 kubelet[2345]: E1212 18:45:50.981152 2345 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.237.139.125:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.237.139.125:6443: connect: connection refused" logger="UnhandledError"
Dec 12 18:45:50.981788 kubelet[2345]: I1212 18:45:50.981311 2345 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 18:45:50.986471 kubelet[2345]: I1212 18:45:50.986451 2345 policy_none.go:49] "None policy: Start"
Dec 12 18:45:50.986577 kubelet[2345]: I1212 18:45:50.986560 2345 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 12 18:45:50.986664 kubelet[2345]: I1212 18:45:50.986650 2345 state_mem.go:35] "Initializing new in-memory state store"
Dec 12 18:45:50.996222 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 18:45:51.011224 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 18:45:51.015634 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 18:45:51.028561 kubelet[2345]: I1212 18:45:51.028528 2345 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 12 18:45:51.028777 kubelet[2345]: I1212 18:45:51.028753 2345 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 18:45:51.028822 kubelet[2345]: I1212 18:45:51.028772 2345 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 18:45:51.029494 kubelet[2345]: I1212 18:45:51.029388 2345 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 18:45:51.033024 kubelet[2345]: E1212 18:45:51.032921 2345 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 18:45:51.033024 kubelet[2345]: E1212 18:45:51.032985 2345 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-237-139-125\" not found"
Dec 12 18:45:51.088270 systemd[1]: Created slice kubepods-burstable-podea1a94602a1ddfd128bf458917e139dd.slice - libcontainer container kubepods-burstable-podea1a94602a1ddfd128bf458917e139dd.slice.
Dec 12 18:45:51.098215 kubelet[2345]: E1212 18:45:51.098190 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125"
Dec 12 18:45:51.101558 systemd[1]: Created slice kubepods-burstable-pod38e2169ed30199eda1504e30e3595ce3.slice - libcontainer container kubepods-burstable-pod38e2169ed30199eda1504e30e3595ce3.slice.
Dec 12 18:45:51.103849 kubelet[2345]: E1212 18:45:51.103678 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125"
Dec 12 18:45:51.105718 systemd[1]: Created slice kubepods-burstable-pod6f7d159e09b1a33687fe641d26fe5e93.slice - libcontainer container kubepods-burstable-pod6f7d159e09b1a33687fe641d26fe5e93.slice.
Dec 12 18:45:51.107422 kubelet[2345]: E1212 18:45:51.107404 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125"
Dec 12 18:45:51.131073 kubelet[2345]: I1212 18:45:51.131057 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-237-139-125"
Dec 12 18:45:51.131405 kubelet[2345]: E1212 18:45:51.131374 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.139.125:6443/api/v1/nodes\": dial tcp 172.237.139.125:6443: connect: connection refused" node="172-237-139-125"
Dec 12 18:45:51.138494 kubelet[2345]: I1212 18:45:51.138471 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125"
Dec 12 18:45:51.138567 kubelet[2345]: I1212 18:45:51.138507 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-kubeconfig\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:51.138567 kubelet[2345]: I1212 18:45:51.138534 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea1a94602a1ddfd128bf458917e139dd-kubeconfig\") pod \"kube-scheduler-172-237-139-125\" (UID: \"ea1a94602a1ddfd128bf458917e139dd\") " pod="kube-system/kube-scheduler-172-237-139-125"
Dec 12 18:45:51.138567 kubelet[2345]: I1212 18:45:51.138549 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-ca-certs\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125"
Dec 12 18:45:51.138644 kubelet[2345]: I1212 18:45:51.138570 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-k8s-certs\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125"
Dec 12 18:45:51.138644 kubelet[2345]: I1212 18:45:51.138589 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-ca-certs\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:51.138644 kubelet[2345]: I1212 18:45:51.138608 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-flexvolume-dir\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:51.138644 kubelet[2345]: I1212 18:45:51.138624 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-k8s-certs\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:51.138736 kubelet[2345]: I1212 18:45:51.138645 2345 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:51.139202 kubelet[2345]: E1212 18:45:51.139171 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.139.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-139-125?timeout=10s\": dial tcp 172.237.139.125:6443: connect: connection refused" interval="400ms"
Dec 12 18:45:51.333700 kubelet[2345]: I1212 18:45:51.333591 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-237-139-125"
Dec 12 18:45:51.334249 kubelet[2345]: E1212 18:45:51.334172 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.139.125:6443/api/v1/nodes\": dial tcp 172.237.139.125:6443: connect: connection refused" node="172-237-139-125"
Dec 12 18:45:51.399079 kubelet[2345]: E1212 18:45:51.398988 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:51.399677 containerd[1536]: time="2025-12-12T18:45:51.399640032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-139-125,Uid:ea1a94602a1ddfd128bf458917e139dd,Namespace:kube-system,Attempt:0,}"
Dec 12 18:45:51.406395 kubelet[2345]: E1212 18:45:51.406287 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:51.407291 containerd[1536]: time="2025-12-12T18:45:51.407239546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-139-125,Uid:38e2169ed30199eda1504e30e3595ce3,Namespace:kube-system,Attempt:0,}"
Dec 12 18:45:51.408574 kubelet[2345]: E1212 18:45:51.408516 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:51.409535 containerd[1536]: time="2025-12-12T18:45:51.409303708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-139-125,Uid:6f7d159e09b1a33687fe641d26fe5e93,Namespace:kube-system,Attempt:0,}"
Dec 12 18:45:51.437532 containerd[1536]: time="2025-12-12T18:45:51.437177910Z" level=info msg="connecting to shim decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406" address="unix:///run/containerd/s/98314424ef7456ec10b644af09efb1f5286f69f44caddd5b96a354255e5178c4" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:45:51.445315 containerd[1536]: time="2025-12-12T18:45:51.445231781Z" level=info msg="connecting to shim c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7" address="unix:///run/containerd/s/659867f4f44f343306a1f3046397859ba8661675128d88f62fb9c35b3b625f71" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:45:51.466358 containerd[1536]: time="2025-12-12T18:45:51.463922209Z" level=info msg="connecting to shim 46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f" address="unix:///run/containerd/s/772b75dd9a8c3e300377b63aac5bb1e851f9acbeee27f4bbb9f7f733805a1add" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:45:51.491504 systemd[1]: Started cri-containerd-decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406.scope - libcontainer container decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406.
Dec 12 18:45:51.514644 systemd[1]: Started cri-containerd-c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7.scope - libcontainer container c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7.
Dec 12 18:45:51.540306 kubelet[2345]: E1212 18:45:51.540249 2345 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.237.139.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-237-139-125?timeout=10s\": dial tcp 172.237.139.125:6443: connect: connection refused" interval="800ms"
Dec 12 18:45:51.542269 systemd[1]: Started cri-containerd-46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f.scope - libcontainer container 46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f.
Dec 12 18:45:51.594920 containerd[1536]: time="2025-12-12T18:45:51.594390601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-237-139-125,Uid:ea1a94602a1ddfd128bf458917e139dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406\"" Dec 12 18:45:51.595635 kubelet[2345]: E1212 18:45:51.595560 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:51.601866 containerd[1536]: time="2025-12-12T18:45:51.601815372Z" level=info msg="CreateContainer within sandbox \"decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:45:51.615216 containerd[1536]: time="2025-12-12T18:45:51.614963535Z" level=info msg="Container d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:51.631443 containerd[1536]: time="2025-12-12T18:45:51.631397528Z" level=info msg="CreateContainer within sandbox \"decd83fae4eb8767c5af9c905cac7eb42d7d5ff7b75376842e0cd1a400ac2406\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f\"" Dec 12 18:45:51.635488 containerd[1536]: time="2025-12-12T18:45:51.635398948Z" level=info msg="StartContainer for \"d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f\"" Dec 12 18:45:51.636438 containerd[1536]: time="2025-12-12T18:45:51.636316647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-237-139-125,Uid:6f7d159e09b1a33687fe641d26fe5e93,Namespace:kube-system,Attempt:0,} returns sandbox id \"c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7\"" Dec 12 18:45:51.637519 kubelet[2345]: E1212 18:45:51.637483 2345 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:51.638623 containerd[1536]: time="2025-12-12T18:45:51.638271455Z" level=info msg="connecting to shim d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f" address="unix:///run/containerd/s/98314424ef7456ec10b644af09efb1f5286f69f44caddd5b96a354255e5178c4" protocol=ttrpc version=3 Dec 12 18:45:51.639992 containerd[1536]: time="2025-12-12T18:45:51.639891747Z" level=info msg="CreateContainer within sandbox \"c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:45:51.650819 containerd[1536]: time="2025-12-12T18:45:51.650772908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-237-139-125,Uid:38e2169ed30199eda1504e30e3595ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f\"" Dec 12 18:45:51.651833 containerd[1536]: time="2025-12-12T18:45:51.651794128Z" level=info msg="Container 28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:51.653875 kubelet[2345]: E1212 18:45:51.653836 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:51.656440 containerd[1536]: time="2025-12-12T18:45:51.656395220Z" level=info msg="CreateContainer within sandbox \"46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:45:51.656865 containerd[1536]: time="2025-12-12T18:45:51.656799378Z" level=info msg="CreateContainer within sandbox \"c71868a005b8bfdacf2c01bf79a81b8610808d0a58681e086f41c8f501d5e3a7\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225\"" Dec 12 18:45:51.657683 containerd[1536]: time="2025-12-12T18:45:51.657664336Z" level=info msg="StartContainer for \"28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225\"" Dec 12 18:45:51.659818 containerd[1536]: time="2025-12-12T18:45:51.659795398Z" level=info msg="connecting to shim 28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225" address="unix:///run/containerd/s/659867f4f44f343306a1f3046397859ba8661675128d88f62fb9c35b3b625f71" protocol=ttrpc version=3 Dec 12 18:45:51.669150 containerd[1536]: time="2025-12-12T18:45:51.667039412Z" level=info msg="Container 29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:45:51.676467 systemd[1]: Started cri-containerd-d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f.scope - libcontainer container d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f. 
Dec 12 18:45:51.681428 containerd[1536]: time="2025-12-12T18:45:51.681403369Z" level=info msg="CreateContainer within sandbox \"46eff7562bda4057641f1a54f5f9aae645fac8d63fa31c52bcf3739591f2e16f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446\"" Dec 12 18:45:51.682519 containerd[1536]: time="2025-12-12T18:45:51.682499623Z" level=info msg="StartContainer for \"29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446\"" Dec 12 18:45:51.688807 containerd[1536]: time="2025-12-12T18:45:51.688785806Z" level=info msg="connecting to shim 29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446" address="unix:///run/containerd/s/772b75dd9a8c3e300377b63aac5bb1e851f9acbeee27f4bbb9f7f733805a1add" protocol=ttrpc version=3 Dec 12 18:45:51.690362 systemd[1]: Started cri-containerd-28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225.scope - libcontainer container 28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225. Dec 12 18:45:51.730502 systemd[1]: Started cri-containerd-29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446.scope - libcontainer container 29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446. 
Dec 12 18:45:51.741606 kubelet[2345]: I1212 18:45:51.741180 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-237-139-125" Dec 12 18:45:51.742451 kubelet[2345]: E1212 18:45:51.742331 2345 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.237.139.125:6443/api/v1/nodes\": dial tcp 172.237.139.125:6443: connect: connection refused" node="172-237-139-125" Dec 12 18:45:51.776476 containerd[1536]: time="2025-12-12T18:45:51.776441221Z" level=info msg="StartContainer for \"d908ab9ba4e527395240e0e0023b3b84bbb8c12d0fbd85153626d791ad52614f\" returns successfully" Dec 12 18:45:51.820690 containerd[1536]: time="2025-12-12T18:45:51.820621761Z" level=info msg="StartContainer for \"29b447a28b2e00fc998a8990c1e5222fcbc7fd1eb7bbd1b9ee261a824579a446\" returns successfully" Dec 12 18:45:51.831879 containerd[1536]: time="2025-12-12T18:45:51.831614218Z" level=info msg="StartContainer for \"28b03e000acb559c83ced2fbe8fbb6185f14573df142ab947b709b811e92e225\" returns successfully" Dec 12 18:45:51.994338 kubelet[2345]: E1212 18:45:51.992480 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:51.994338 kubelet[2345]: E1212 18:45:51.992605 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:51.994615 kubelet[2345]: E1212 18:45:51.994365 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:51.994698 kubelet[2345]: E1212 18:45:51.994670 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 
18:45:51.998338 kubelet[2345]: E1212 18:45:51.998312 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:51.998426 kubelet[2345]: E1212 18:45:51.998400 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:52.545759 kubelet[2345]: I1212 18:45:52.545715 2345 kubelet_node_status.go:75] "Attempting to register node" node="172-237-139-125" Dec 12 18:45:53.000219 kubelet[2345]: E1212 18:45:53.000086 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:53.001366 kubelet[2345]: E1212 18:45:53.001341 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:53.002058 kubelet[2345]: E1212 18:45:53.002033 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:53.002178 kubelet[2345]: E1212 18:45:53.002156 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:53.557240 kubelet[2345]: E1212 18:45:53.557176 2345 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-237-139-125\" not found" node="172-237-139-125" Dec 12 18:45:53.639789 kubelet[2345]: E1212 18:45:53.639763 2345 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-237-139-125\" not found" 
node="172-237-139-125" Dec 12 18:45:53.639949 kubelet[2345]: E1212 18:45:53.639886 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:53.702200 kubelet[2345]: I1212 18:45:53.702156 2345 kubelet_node_status.go:78] "Successfully registered node" node="172-237-139-125" Dec 12 18:45:53.702200 kubelet[2345]: E1212 18:45:53.702198 2345 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-237-139-125\": node \"172-237-139-125\" not found" Dec 12 18:45:53.737632 kubelet[2345]: I1212 18:45:53.737431 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:53.743692 kubelet[2345]: E1212 18:45:53.743640 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-139-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:53.743782 kubelet[2345]: I1212 18:45:53.743709 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:53.745660 kubelet[2345]: E1212 18:45:53.745559 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-139-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:53.745660 kubelet[2345]: I1212 18:45:53.745581 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:53.747290 kubelet[2345]: E1212 18:45:53.747269 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-237-139-125\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:53.929349 kubelet[2345]: I1212 18:45:53.928754 2345 apiserver.go:52] "Watching apiserver" Dec 12 18:45:53.936993 kubelet[2345]: I1212 18:45:53.936955 2345 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:45:53.999385 kubelet[2345]: I1212 18:45:53.999352 2345 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:54.001220 kubelet[2345]: E1212 18:45:54.001191 2345 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-139-125\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:54.001655 kubelet[2345]: E1212 18:45:54.001340 2345 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:45:55.617324 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... Dec 12 18:45:55.617341 systemd[1]: Reloading... Dec 12 18:45:55.720147 zram_generator::config[2671]: No configuration found. Dec 12 18:45:55.937528 systemd[1]: Reloading finished in 319 ms. Dec 12 18:45:55.966348 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:55.989850 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:45:55.990251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:45:55.990338 systemd[1]: kubelet.service: Consumed 694ms CPU time, 130.2M memory peak. Dec 12 18:45:55.992635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:45:56.203405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 18:45:56.212312 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:45:56.253281 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:45:56.253680 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:45:56.253742 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:45:56.253865 kubelet[2719]: I1212 18:45:56.253833 2719 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:45:56.260465 kubelet[2719]: I1212 18:45:56.260443 2719 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 18:45:56.260541 kubelet[2719]: I1212 18:45:56.260530 2719 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:45:56.260754 kubelet[2719]: I1212 18:45:56.260741 2719 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 18:45:56.262741 kubelet[2719]: I1212 18:45:56.262715 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 12 18:45:56.266871 kubelet[2719]: I1212 18:45:56.266854 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:45:56.272464 kubelet[2719]: I1212 18:45:56.272450 2719 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:45:56.277692 kubelet[2719]: I1212 18:45:56.277653 2719 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:45:56.278001 kubelet[2719]: I1212 18:45:56.277946 2719 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:45:56.278264 kubelet[2719]: I1212 18:45:56.277988 2719 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-237-139-125","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CP
UManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:45:56.278264 kubelet[2719]: I1212 18:45:56.278264 2719 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:45:56.278382 kubelet[2719]: I1212 18:45:56.278277 2719 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 18:45:56.278382 kubelet[2719]: I1212 18:45:56.278369 2719 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:45:56.279524 kubelet[2719]: I1212 18:45:56.278745 2719 kubelet.go:446] "Attempting to sync node with API server" Dec 12 18:45:56.279524 kubelet[2719]: I1212 18:45:56.278775 2719 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:45:56.279524 kubelet[2719]: I1212 18:45:56.278805 2719 kubelet.go:352] "Adding apiserver pod source" Dec 12 18:45:56.279524 kubelet[2719]: I1212 18:45:56.278817 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:45:56.280129 kubelet[2719]: I1212 18:45:56.280090 2719 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:45:56.280707 kubelet[2719]: I1212 18:45:56.280687 2719 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 18:45:56.281464 kubelet[2719]: I1212 18:45:56.281450 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:45:56.281543 kubelet[2719]: I1212 18:45:56.281534 2719 server.go:1287] "Started kubelet" Dec 12 18:45:56.286776 kubelet[2719]: I1212 18:45:56.286761 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:45:56.297032 kubelet[2719]: I1212 18:45:56.296268 
2719 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:45:56.299795 kubelet[2719]: I1212 18:45:56.299781 2719 server.go:479] "Adding debug handlers to kubelet server" Dec 12 18:45:56.305437 kubelet[2719]: I1212 18:45:56.305381 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:45:56.306214 kubelet[2719]: I1212 18:45:56.306198 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:45:56.308367 kubelet[2719]: I1212 18:45:56.308333 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:45:56.308755 kubelet[2719]: E1212 18:45:56.308719 2719 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-237-139-125\" not found" Dec 12 18:45:56.309670 kubelet[2719]: I1212 18:45:56.309636 2719 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:45:56.315838 kubelet[2719]: I1212 18:45:56.314702 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:45:56.315902 kubelet[2719]: I1212 18:45:56.314879 2719 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:45:56.321487 kubelet[2719]: I1212 18:45:56.321383 2719 factory.go:221] Registration of the systemd container factory successfully Dec 12 18:45:56.322642 kubelet[2719]: I1212 18:45:56.321546 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:45:56.324495 kubelet[2719]: E1212 18:45:56.324456 2719 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:45:56.326027 kubelet[2719]: I1212 18:45:56.326010 2719 factory.go:221] Registration of the containerd container factory successfully Dec 12 18:45:56.332862 kubelet[2719]: I1212 18:45:56.332840 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 18:45:56.339251 kubelet[2719]: I1212 18:45:56.339236 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 18:45:56.339355 kubelet[2719]: I1212 18:45:56.339345 2719 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 18:45:56.339450 kubelet[2719]: I1212 18:45:56.339439 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:45:56.339754 kubelet[2719]: I1212 18:45:56.339728 2719 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 18:45:56.339981 kubelet[2719]: E1212 18:45:56.339964 2719 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:45:56.394869 kubelet[2719]: I1212 18:45:56.394835 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:45:56.395340 kubelet[2719]: I1212 18:45:56.395031 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:45:56.395340 kubelet[2719]: I1212 18:45:56.395058 2719 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:45:56.395600 kubelet[2719]: I1212 18:45:56.395585 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:45:56.395669 kubelet[2719]: I1212 18:45:56.395647 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:45:56.395712 kubelet[2719]: I1212 18:45:56.395705 2719 policy_none.go:49] "None policy: Start" Dec 12 18:45:56.395768 kubelet[2719]: I1212 18:45:56.395759 2719 memory_manager.go:186] "Starting 
memorymanager" policy="None" Dec 12 18:45:56.395819 kubelet[2719]: I1212 18:45:56.395811 2719 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:45:56.395959 kubelet[2719]: I1212 18:45:56.395949 2719 state_mem.go:75] "Updated machine memory state" Dec 12 18:45:56.406083 kubelet[2719]: I1212 18:45:56.406065 2719 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 18:45:56.407129 kubelet[2719]: I1212 18:45:56.406946 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:45:56.407129 kubelet[2719]: I1212 18:45:56.406962 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:45:56.408179 kubelet[2719]: I1212 18:45:56.408165 2719 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:45:56.411368 kubelet[2719]: E1212 18:45:56.410685 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:45:56.441740 kubelet[2719]: I1212 18:45:56.441722 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:56.442144 kubelet[2719]: I1212 18:45:56.441945 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:56.442387 kubelet[2719]: I1212 18:45:56.442060 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:56.511142 kubelet[2719]: I1212 18:45:56.511008 2719 kubelet_node_status.go:75] "Attempting to register node" node="172-237-139-125" Dec 12 18:45:56.518133 kubelet[2719]: I1212 18:45:56.518092 2719 kubelet_node_status.go:124] "Node was previously registered" node="172-237-139-125" Dec 12 18:45:56.518395 kubelet[2719]: I1212 18:45:56.518377 2719 kubelet_node_status.go:78] "Successfully registered node" node="172-237-139-125" Dec 12 18:45:56.617164 kubelet[2719]: I1212 18:45:56.617095 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-flexvolume-dir\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:56.617164 kubelet[2719]: I1212 18:45:56.617153 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea1a94602a1ddfd128bf458917e139dd-kubeconfig\") pod \"kube-scheduler-172-237-139-125\" (UID: \"ea1a94602a1ddfd128bf458917e139dd\") " pod="kube-system/kube-scheduler-172-237-139-125" Dec 12 18:45:56.617267 kubelet[2719]: I1212 18:45:56.617181 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-usr-share-ca-certificates\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:56.617267 kubelet[2719]: I1212 18:45:56.617205 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-ca-certs\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:56.617267 kubelet[2719]: I1212 18:45:56.617223 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-k8s-certs\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:56.617267 kubelet[2719]: I1212 18:45:56.617245 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38e2169ed30199eda1504e30e3595ce3-usr-share-ca-certificates\") pod \"kube-apiserver-172-237-139-125\" (UID: \"38e2169ed30199eda1504e30e3595ce3\") " pod="kube-system/kube-apiserver-172-237-139-125" Dec 12 18:45:56.617267 kubelet[2719]: I1212 18:45:56.617266 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-ca-certs\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125" Dec 12 18:45:56.617418 kubelet[2719]: I1212 18:45:56.617286 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-k8s-certs\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:56.617418 kubelet[2719]: I1212 18:45:56.617306 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f7d159e09b1a33687fe641d26fe5e93-kubeconfig\") pod \"kube-controller-manager-172-237-139-125\" (UID: \"6f7d159e09b1a33687fe641d26fe5e93\") " pod="kube-system/kube-controller-manager-172-237-139-125"
Dec 12 18:45:56.617823 sudo[2752]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 12 18:45:56.618283 sudo[2752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 12 18:45:56.748045 kubelet[2719]: E1212 18:45:56.747992 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:56.748237 kubelet[2719]: E1212 18:45:56.748203 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:56.748593 kubelet[2719]: E1212 18:45:56.748560 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:56.949799 sudo[2752]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:57.285483 kubelet[2719]: I1212 18:45:57.285068 2719 apiserver.go:52] "Watching apiserver"
Dec 12 18:45:57.316688 kubelet[2719]: I1212 18:45:57.316661 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 12 18:45:57.369965 kubelet[2719]: E1212 18:45:57.369910 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:57.370742 kubelet[2719]: I1212 18:45:57.370671 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-237-139-125"
Dec 12 18:45:57.373436 kubelet[2719]: I1212 18:45:57.373400 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-237-139-125"
Dec 12 18:45:57.378417 kubelet[2719]: E1212 18:45:57.378331 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-237-139-125\" already exists" pod="kube-system/kube-scheduler-172-237-139-125"
Dec 12 18:45:57.378660 kubelet[2719]: E1212 18:45:57.378441 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:57.388209 kubelet[2719]: I1212 18:45:57.388156 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-237-139-125" podStartSLOduration=1.388128704 podStartE2EDuration="1.388128704s" podCreationTimestamp="2025-12-12 18:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:45:57.372396856 +0000 UTC m=+1.154935449" watchObservedRunningTime="2025-12-12 18:45:57.388128704 +0000 UTC m=+1.170667297"
Dec 12 18:45:57.389530 kubelet[2719]: E1212 18:45:57.389050 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-237-139-125\" already exists" pod="kube-system/kube-apiserver-172-237-139-125"
Dec 12 18:45:57.389690 kubelet[2719]: E1212 18:45:57.389676 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:57.400710 kubelet[2719]: I1212 18:45:57.400203 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-237-139-125" podStartSLOduration=1.400189945 podStartE2EDuration="1.400189945s" podCreationTimestamp="2025-12-12 18:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:45:57.388913811 +0000 UTC m=+1.171452434" watchObservedRunningTime="2025-12-12 18:45:57.400189945 +0000 UTC m=+1.182728538"
Dec 12 18:45:57.400710 kubelet[2719]: I1212 18:45:57.400465 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-237-139-125" podStartSLOduration=1.400459226 podStartE2EDuration="1.400459226s" podCreationTimestamp="2025-12-12 18:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:45:57.399243325 +0000 UTC m=+1.181781918" watchObservedRunningTime="2025-12-12 18:45:57.400459226 +0000 UTC m=+1.182997819"
Dec 12 18:45:58.255323 sudo[1793]: pam_unix(sudo:session): session closed for user root
Dec 12 18:45:58.306286 sshd[1792]: Connection closed by 139.178.68.195 port 34054
Dec 12 18:45:58.306748 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Dec 12 18:45:58.311470 systemd[1]: sshd@6-172.237.139.125:22-139.178.68.195:34054.service: Deactivated successfully.
Dec 12 18:45:58.315744 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 18:45:58.316007 systemd[1]: session-7.scope: Consumed 3.628s CPU time, 269.1M memory peak.
Dec 12 18:45:58.318243 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit.
Dec 12 18:45:58.320489 systemd-logind[1528]: Removed session 7.
Dec 12 18:45:58.372136 kubelet[2719]: E1212 18:45:58.371749 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:58.372136 kubelet[2719]: E1212 18:45:58.372073 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:45:58.822448 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 12 18:46:02.451936 kubelet[2719]: I1212 18:46:02.451895 2719 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 18:46:02.453073 kubelet[2719]: I1212 18:46:02.452801 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 18:46:02.453163 containerd[1536]: time="2025-12-12T18:46:02.452293932Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 18:46:02.569356 systemd[1]: Created slice kubepods-besteffort-pod1687a6da_d0cd_482a_a77c_787657b75ce5.slice - libcontainer container kubepods-besteffort-pod1687a6da_d0cd_482a_a77c_787657b75ce5.slice.
Dec 12 18:46:02.597257 systemd[1]: Created slice kubepods-burstable-podb5d663a3_5f91_4d81_a2b6_8af2427171bb.slice - libcontainer container kubepods-burstable-podb5d663a3_5f91_4d81_a2b6_8af2427171bb.slice.
Dec 12 18:46:02.658035 kubelet[2719]: I1212 18:46:02.657761 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1687a6da-d0cd-482a-a77c-787657b75ce5-kube-proxy\") pod \"kube-proxy-9xbl9\" (UID: \"1687a6da-d0cd-482a-a77c-787657b75ce5\") " pod="kube-system/kube-proxy-9xbl9"
Dec 12 18:46:02.658035 kubelet[2719]: I1212 18:46:02.657812 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-etc-cni-netd\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658035 kubelet[2719]: I1212 18:46:02.657836 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-kernel\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658035 kubelet[2719]: I1212 18:46:02.657855 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkczl\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658035 kubelet[2719]: I1212 18:46:02.657884 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lh7\" (UniqueName: \"kubernetes.io/projected/1687a6da-d0cd-482a-a77c-787657b75ce5-kube-api-access-72lh7\") pod \"kube-proxy-9xbl9\" (UID: \"1687a6da-d0cd-482a-a77c-787657b75ce5\") " pod="kube-system/kube-proxy-9xbl9"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.657901 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-run\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.657962 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-lib-modules\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.658004 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-xtables-lock\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.658044 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-cgroup\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.658063 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5d663a3-5f91-4d81-a2b6-8af2427171bb-clustermesh-secrets\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658342 kubelet[2719]: I1212 18:46:02.658078 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-net\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658095 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1687a6da-d0cd-482a-a77c-787657b75ce5-xtables-lock\") pod \"kube-proxy-9xbl9\" (UID: \"1687a6da-d0cd-482a-a77c-787657b75ce5\") " pod="kube-system/kube-proxy-9xbl9"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658125 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-bpf-maps\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658139 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hostproc\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658153 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hubble-tls\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658166 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-config-path\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658653 kubelet[2719]: I1212 18:46:02.658184 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cni-path\") pod \"cilium-nxbnr\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") " pod="kube-system/cilium-nxbnr"
Dec 12 18:46:02.658784 kubelet[2719]: I1212 18:46:02.658199 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1687a6da-d0cd-482a-a77c-787657b75ce5-lib-modules\") pod \"kube-proxy-9xbl9\" (UID: \"1687a6da-d0cd-482a-a77c-787657b75ce5\") " pod="kube-system/kube-proxy-9xbl9"
Dec 12 18:46:02.784997 kubelet[2719]: E1212 18:46:02.784903 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 12 18:46:02.785396 kubelet[2719]: E1212 18:46:02.785174 2719 projected.go:194] Error preparing data for projected volume kube-api-access-72lh7 for pod kube-system/kube-proxy-9xbl9: configmap "kube-root-ca.crt" not found
Dec 12 18:46:02.785396 kubelet[2719]: E1212 18:46:02.785266 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1687a6da-d0cd-482a-a77c-787657b75ce5-kube-api-access-72lh7 podName:1687a6da-d0cd-482a-a77c-787657b75ce5 nodeName:}" failed. No retries permitted until 2025-12-12 18:46:03.285243755 +0000 UTC m=+7.067782348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-72lh7" (UniqueName: "kubernetes.io/projected/1687a6da-d0cd-482a-a77c-787657b75ce5-kube-api-access-72lh7") pod "kube-proxy-9xbl9" (UID: "1687a6da-d0cd-482a-a77c-787657b75ce5") : configmap "kube-root-ca.crt" not found
Dec 12 18:46:02.786568 kubelet[2719]: E1212 18:46:02.786495 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 12 18:46:02.786568 kubelet[2719]: E1212 18:46:02.786516 2719 projected.go:194] Error preparing data for projected volume kube-api-access-fkczl for pod kube-system/cilium-nxbnr: configmap "kube-root-ca.crt" not found
Dec 12 18:46:02.786568 kubelet[2719]: E1212 18:46:02.786548 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl podName:b5d663a3-5f91-4d81-a2b6-8af2427171bb nodeName:}" failed. No retries permitted until 2025-12-12 18:46:03.286535025 +0000 UTC m=+7.069073618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fkczl" (UniqueName: "kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl") pod "cilium-nxbnr" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb") : configmap "kube-root-ca.crt" not found
Dec 12 18:46:03.229456 kubelet[2719]: E1212 18:46:03.229275 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.388345 kubelet[2719]: E1212 18:46:03.387835 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.479409 kubelet[2719]: E1212 18:46:03.479361 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.481010 containerd[1536]: time="2025-12-12T18:46:03.480757835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xbl9,Uid:1687a6da-d0cd-482a-a77c-787657b75ce5,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:03.496894 containerd[1536]: time="2025-12-12T18:46:03.496786055Z" level=info msg="connecting to shim 61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba" address="unix:///run/containerd/s/b57a6e4c672a79a9ad9f11f9f9e141993f5f48678683cc6dd53773eb5e55221a" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:03.502216 kubelet[2719]: E1212 18:46:03.502169 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.503928 containerd[1536]: time="2025-12-12T18:46:03.503858901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxbnr,Uid:b5d663a3-5f91-4d81-a2b6-8af2427171bb,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:03.530320 systemd[1]: Started cri-containerd-61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba.scope - libcontainer container 61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba.
Dec 12 18:46:03.538173 containerd[1536]: time="2025-12-12T18:46:03.538082279Z" level=info msg="connecting to shim e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:03.590995 systemd[1]: Created slice kubepods-besteffort-pod14fb9d37_3fc4_493e_a286_c30451214f46.slice - libcontainer container kubepods-besteffort-pod14fb9d37_3fc4_493e_a286_c30451214f46.slice.
Dec 12 18:46:03.608436 systemd[1]: Started cri-containerd-e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83.scope - libcontainer container e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83.
Dec 12 18:46:03.646009 containerd[1536]: time="2025-12-12T18:46:03.645929720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xbl9,Uid:1687a6da-d0cd-482a-a77c-787657b75ce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba\""
Dec 12 18:46:03.648041 kubelet[2719]: E1212 18:46:03.647396 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.653051 containerd[1536]: time="2025-12-12T18:46:03.652980626Z" level=info msg="CreateContainer within sandbox \"61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 18:46:03.662958 containerd[1536]: time="2025-12-12T18:46:03.662926625Z" level=info msg="Container 4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:03.666039 kubelet[2719]: I1212 18:46:03.666019 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fb9d37-3fc4-493e-a286-c30451214f46-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-thjrd\" (UID: \"14fb9d37-3fc4-493e-a286-c30451214f46\") " pod="kube-system/cilium-operator-6c4d7847fc-thjrd"
Dec 12 18:46:03.666340 kubelet[2719]: I1212 18:46:03.666323 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7cf\" (UniqueName: \"kubernetes.io/projected/14fb9d37-3fc4-493e-a286-c30451214f46-kube-api-access-ql7cf\") pod \"cilium-operator-6c4d7847fc-thjrd\" (UID: \"14fb9d37-3fc4-493e-a286-c30451214f46\") " pod="kube-system/cilium-operator-6c4d7847fc-thjrd"
Dec 12 18:46:03.668922 containerd[1536]: time="2025-12-12T18:46:03.668898731Z" level=info msg="CreateContainer within sandbox \"61d8c82a51d54f1aed1d45ea1602b4aca42a6427b8b7c6a0d9c551375503a3ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca\""
Dec 12 18:46:03.669273 containerd[1536]: time="2025-12-12T18:46:03.669207773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxbnr,Uid:b5d663a3-5f91-4d81-a2b6-8af2427171bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\""
Dec 12 18:46:03.670485 containerd[1536]: time="2025-12-12T18:46:03.670454818Z" level=info msg="StartContainer for \"4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca\""
Dec 12 18:46:03.672070 kubelet[2719]: E1212 18:46:03.671944 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.672950 containerd[1536]: time="2025-12-12T18:46:03.672905186Z" level=info msg="connecting to shim 4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca" address="unix:///run/containerd/s/b57a6e4c672a79a9ad9f11f9f9e141993f5f48678683cc6dd53773eb5e55221a" protocol=ttrpc version=3
Dec 12 18:46:03.675330 containerd[1536]: time="2025-12-12T18:46:03.675255411Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 12 18:46:03.700255 systemd[1]: Started cri-containerd-4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca.scope - libcontainer container 4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca.
Dec 12 18:46:03.809652 containerd[1536]: time="2025-12-12T18:46:03.809066432Z" level=info msg="StartContainer for \"4f18678902005f49c57b582aa079b3dee927a8ba4c1486ae2554725105573dca\" returns successfully"
Dec 12 18:46:03.897627 kubelet[2719]: E1212 18:46:03.897546 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:03.899700 containerd[1536]: time="2025-12-12T18:46:03.899368639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-thjrd,Uid:14fb9d37-3fc4-493e-a286-c30451214f46,Namespace:kube-system,Attempt:0,}"
Dec 12 18:46:03.919919 containerd[1536]: time="2025-12-12T18:46:03.919830449Z" level=info msg="connecting to shim 9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9" address="unix:///run/containerd/s/fba27fb4dbe21a58234d043ba4c4fd6273359eda94acf38ac69d8ec8d7629b8f" namespace=k8s.io protocol=ttrpc version=3
Dec 12 18:46:03.960287 systemd[1]: Started cri-containerd-9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9.scope - libcontainer container 9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9.
Dec 12 18:46:04.027097 containerd[1536]: time="2025-12-12T18:46:04.027019105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-thjrd,Uid:14fb9d37-3fc4-493e-a286-c30451214f46,Namespace:kube-system,Attempt:0,} returns sandbox id \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\""
Dec 12 18:46:04.028306 kubelet[2719]: E1212 18:46:04.028247 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:04.394463 kubelet[2719]: E1212 18:46:04.394433 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:04.396280 kubelet[2719]: E1212 18:46:04.396263 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:06.229756 kubelet[2719]: E1212 18:46:06.229705 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:06.250368 kubelet[2719]: I1212 18:46:06.250292 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xbl9" podStartSLOduration=4.250275402 podStartE2EDuration="4.250275402s" podCreationTimestamp="2025-12-12 18:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:04.40748558 +0000 UTC m=+8.190024173" watchObservedRunningTime="2025-12-12 18:46:06.250275402 +0000 UTC m=+10.032813995"
Dec 12 18:46:06.408044 kubelet[2719]: E1212 18:46:06.407990 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:07.366771 kubelet[2719]: E1212 18:46:07.366642 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:08.283919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017431341.mount: Deactivated successfully.
Dec 12 18:46:10.033902 containerd[1536]: time="2025-12-12T18:46:10.033841319Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:10.034877 containerd[1536]: time="2025-12-12T18:46:10.034610008Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Dec 12 18:46:10.036145 containerd[1536]: time="2025-12-12T18:46:10.035421948Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:10.036997 containerd[1536]: time="2025-12-12T18:46:10.036957966Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.361624822s"
Dec 12 18:46:10.037079 containerd[1536]: time="2025-12-12T18:46:10.037063518Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 12 18:46:10.039060 containerd[1536]: time="2025-12-12T18:46:10.039037627Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 12 18:46:10.041842 containerd[1536]: time="2025-12-12T18:46:10.041765114Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 12 18:46:10.051039 containerd[1536]: time="2025-12-12T18:46:10.050437057Z" level=info msg="Container 99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.059660 containerd[1536]: time="2025-12-12T18:46:10.059619963Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\""
Dec 12 18:46:10.061470 containerd[1536]: time="2025-12-12T18:46:10.061385936Z" level=info msg="StartContainer for \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\""
Dec 12 18:46:10.063226 containerd[1536]: time="2025-12-12T18:46:10.063151079Z" level=info msg="connecting to shim 99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" protocol=ttrpc version=3
Dec 12 18:46:10.093255 systemd[1]: Started cri-containerd-99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988.scope - libcontainer container 99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988.
Dec 12 18:46:10.140258 containerd[1536]: time="2025-12-12T18:46:10.140210943Z" level=info msg="StartContainer for \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\" returns successfully"
Dec 12 18:46:10.155174 systemd[1]: cri-containerd-99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988.scope: Deactivated successfully.
Dec 12 18:46:10.157772 containerd[1536]: time="2025-12-12T18:46:10.157637451Z" level=info msg="received container exit event container_id:\"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\" id:\"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\" pid:3132 exited_at:{seconds:1765565170 nanos:157159370}"
Dec 12 18:46:10.204880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988-rootfs.mount: Deactivated successfully.
Dec 12 18:46:10.417479 kubelet[2719]: E1212 18:46:10.416338 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:10.422410 containerd[1536]: time="2025-12-12T18:46:10.422269234Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 18:46:10.434855 containerd[1536]: time="2025-12-12T18:46:10.434787582Z" level=info msg="Container 27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:10.441182 containerd[1536]: time="2025-12-12T18:46:10.441092407Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\""
Dec 12 18:46:10.442300 containerd[1536]: time="2025-12-12T18:46:10.442275696Z" level=info msg="StartContainer for \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\""
Dec 12 18:46:10.443574 containerd[1536]: time="2025-12-12T18:46:10.443545407Z" level=info msg="connecting to shim 27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" protocol=ttrpc version=3
Dec 12 18:46:10.466248 systemd[1]: Started cri-containerd-27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470.scope - libcontainer container 27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470.
Dec 12 18:46:10.500238 containerd[1536]: time="2025-12-12T18:46:10.500195109Z" level=info msg="StartContainer for \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\" returns successfully"
Dec 12 18:46:10.522074 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:46:10.522316 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:46:10.523296 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:46:10.526950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:46:10.531927 systemd[1]: cri-containerd-27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470.scope: Deactivated successfully.
Dec 12 18:46:10.538661 containerd[1536]: time="2025-12-12T18:46:10.538604493Z" level=info msg="received container exit event container_id:\"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\" id:\"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\" pid:3177 exited_at:{seconds:1765565170 nanos:534255676}"
Dec 12 18:46:10.556469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:46:11.354205 containerd[1536]: time="2025-12-12T18:46:11.354127075Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:11.355477 containerd[1536]: time="2025-12-12T18:46:11.355300362Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Dec 12 18:46:11.356214 containerd[1536]: time="2025-12-12T18:46:11.356178752Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 18:46:11.357638 containerd[1536]: time="2025-12-12T18:46:11.357603006Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.318390004s"
Dec 12 18:46:11.357721 containerd[1536]: time="2025-12-12T18:46:11.357705918Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 12 18:46:11.359681 containerd[1536]: time="2025-12-12T18:46:11.359620553Z" level=info msg="CreateContainer within sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 12 18:46:11.368127 containerd[1536]: time="2025-12-12T18:46:11.368035539Z" level=info msg="Container 1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:11.375375 containerd[1536]: time="2025-12-12T18:46:11.375344359Z" level=info msg="CreateContainer within sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\""
Dec 12 18:46:11.376413 containerd[1536]: time="2025-12-12T18:46:11.376052585Z" level=info msg="StartContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\""
Dec 12 18:46:11.378061 containerd[1536]: time="2025-12-12T18:46:11.378001961Z" level=info msg="connecting to shim 1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc" address="unix:///run/containerd/s/fba27fb4dbe21a58234d043ba4c4fd6273359eda94acf38ac69d8ec8d7629b8f" protocol=ttrpc version=3
Dec 12 18:46:11.403276 systemd[1]: Started cri-containerd-1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc.scope - libcontainer container 1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc.
Dec 12 18:46:11.433794 kubelet[2719]: E1212 18:46:11.433552 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:11.443119 containerd[1536]: time="2025-12-12T18:46:11.442974855Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 18:46:11.469591 containerd[1536]: time="2025-12-12T18:46:11.469318579Z" level=info msg="Container 79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:11.478091 containerd[1536]: time="2025-12-12T18:46:11.478049652Z" level=info msg="StartContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" returns successfully"
Dec 12 18:46:11.480865 containerd[1536]: time="2025-12-12T18:46:11.480819347Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\""
Dec 12 18:46:11.481321 containerd[1536]: time="2025-12-12T18:46:11.481288998Z" level=info msg="StartContainer for \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\""
Dec 12 18:46:11.486141 containerd[1536]: time="2025-12-12T18:46:11.486033598Z" level=info msg="connecting to shim 79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" protocol=ttrpc version=3
Dec 12 18:46:11.511787 systemd[1]: Started cri-containerd-79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1.scope - libcontainer container 79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1.
Dec 12 18:46:11.611478 containerd[1536]: time="2025-12-12T18:46:11.611354639Z" level=info msg="StartContainer for \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\" returns successfully"
Dec 12 18:46:11.618244 systemd[1]: cri-containerd-79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1.scope: Deactivated successfully.
Dec 12 18:46:11.623923 containerd[1536]: time="2025-12-12T18:46:11.623801639Z" level=info msg="received container exit event container_id:\"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\" id:\"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\" pid:3273 exited_at:{seconds:1765565171 nanos:623455511}"
Dec 12 18:46:12.055479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599002811.mount: Deactivated successfully.
Dec 12 18:46:12.440313 kubelet[2719]: E1212 18:46:12.439868 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:12.444131 kubelet[2719]: E1212 18:46:12.442643 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:46:12.448550 containerd[1536]: time="2025-12-12T18:46:12.447305459Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 18:46:12.461577 containerd[1536]: time="2025-12-12T18:46:12.460716855Z" level=info msg="Container 687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:46:12.471129 containerd[1536]: time="2025-12-12T18:46:12.470935131Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\"" Dec 12 18:46:12.472228 containerd[1536]: time="2025-12-12T18:46:12.472184679Z" level=info msg="StartContainer for \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\"" Dec 12 18:46:12.473448 containerd[1536]: time="2025-12-12T18:46:12.473409976Z" level=info msg="connecting to shim 687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" protocol=ttrpc version=3 Dec 12 18:46:12.504475 systemd[1]: Started cri-containerd-687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4.scope - libcontainer container 687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4. Dec 12 18:46:12.552784 systemd[1]: cri-containerd-687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4.scope: Deactivated successfully. Dec 12 18:46:12.556663 containerd[1536]: time="2025-12-12T18:46:12.556551664Z" level=info msg="received container exit event container_id:\"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\" id:\"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\" pid:3314 exited_at:{seconds:1765565172 nanos:555869749}" Dec 12 18:46:12.556943 containerd[1536]: time="2025-12-12T18:46:12.556914282Z" level=info msg="StartContainer for \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\" returns successfully" Dec 12 18:46:12.584026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4-rootfs.mount: Deactivated successfully. 
Dec 12 18:46:13.450072 kubelet[2719]: E1212 18:46:13.450016 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:13.451479 kubelet[2719]: E1212 18:46:13.450628 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:13.453681 containerd[1536]: time="2025-12-12T18:46:13.453570385Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 12 18:46:13.473951 containerd[1536]: time="2025-12-12T18:46:13.473905722Z" level=info msg="Container 524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:13.482094 kubelet[2719]: I1212 18:46:13.482046 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-thjrd" podStartSLOduration=3.153105199 podStartE2EDuration="10.482027763s" podCreationTimestamp="2025-12-12 18:46:03 +0000 UTC" firstStartedPulling="2025-12-12 18:46:04.029532861 +0000 UTC m=+7.812071454" lastFinishedPulling="2025-12-12 18:46:11.358455425 +0000 UTC m=+15.140994018" observedRunningTime="2025-12-12 18:46:12.471844301 +0000 UTC m=+16.254382894" watchObservedRunningTime="2025-12-12 18:46:13.482027763 +0000 UTC m=+17.264566356" Dec 12 18:46:13.482598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450599270.mount: Deactivated successfully. 
Dec 12 18:46:13.485540 containerd[1536]: time="2025-12-12T18:46:13.485494746Z" level=info msg="CreateContainer within sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\"" Dec 12 18:46:13.486228 containerd[1536]: time="2025-12-12T18:46:13.486160910Z" level=info msg="StartContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\"" Dec 12 18:46:13.487207 containerd[1536]: time="2025-12-12T18:46:13.487184841Z" level=info msg="connecting to shim 524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651" address="unix:///run/containerd/s/1ee69e97bc28e9aa5710d58e470551755609051e7f62a3f54442d22a951c0081" protocol=ttrpc version=3 Dec 12 18:46:13.520258 systemd[1]: Started cri-containerd-524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651.scope - libcontainer container 524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651. Dec 12 18:46:13.571064 containerd[1536]: time="2025-12-12T18:46:13.570980641Z" level=info msg="StartContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" returns successfully" Dec 12 18:46:13.696734 kubelet[2719]: I1212 18:46:13.696690 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:46:13.709081 update_engine[1529]: I20251212 18:46:13.708181 1529 update_attempter.cc:509] Updating boot flags... Dec 12 18:46:13.760869 systemd[1]: Created slice kubepods-burstable-pod54e6d8ef_6291_4da6_957a_c14bd82ffa0d.slice - libcontainer container kubepods-burstable-pod54e6d8ef_6291_4da6_957a_c14bd82ffa0d.slice. Dec 12 18:46:13.784080 systemd[1]: Created slice kubepods-burstable-pod1f953c43_2b48_4560_b841_b0f4b9f95480.slice - libcontainer container kubepods-burstable-pod1f953c43_2b48_4560_b841_b0f4b9f95480.slice. 
Dec 12 18:46:13.840911 kubelet[2719]: I1212 18:46:13.840875 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e6d8ef-6291-4da6-957a-c14bd82ffa0d-config-volume\") pod \"coredns-668d6bf9bc-kcwcd\" (UID: \"54e6d8ef-6291-4da6-957a-c14bd82ffa0d\") " pod="kube-system/coredns-668d6bf9bc-kcwcd" Dec 12 18:46:13.842278 kubelet[2719]: I1212 18:46:13.841180 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f953c43-2b48-4560-b841-b0f4b9f95480-config-volume\") pod \"coredns-668d6bf9bc-pf74x\" (UID: \"1f953c43-2b48-4560-b841-b0f4b9f95480\") " pod="kube-system/coredns-668d6bf9bc-pf74x" Dec 12 18:46:13.842278 kubelet[2719]: I1212 18:46:13.841205 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzf2\" (UniqueName: \"kubernetes.io/projected/54e6d8ef-6291-4da6-957a-c14bd82ffa0d-kube-api-access-mnzf2\") pod \"coredns-668d6bf9bc-kcwcd\" (UID: \"54e6d8ef-6291-4da6-957a-c14bd82ffa0d\") " pod="kube-system/coredns-668d6bf9bc-kcwcd" Dec 12 18:46:13.842278 kubelet[2719]: I1212 18:46:13.842218 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqzvw\" (UniqueName: \"kubernetes.io/projected/1f953c43-2b48-4560-b841-b0f4b9f95480-kube-api-access-tqzvw\") pod \"coredns-668d6bf9bc-pf74x\" (UID: \"1f953c43-2b48-4560-b841-b0f4b9f95480\") " pod="kube-system/coredns-668d6bf9bc-pf74x" Dec 12 18:46:14.077533 kubelet[2719]: E1212 18:46:14.072383 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:14.077675 containerd[1536]: time="2025-12-12T18:46:14.074315223Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kcwcd,Uid:54e6d8ef-6291-4da6-957a-c14bd82ffa0d,Namespace:kube-system,Attempt:0,}" Dec 12 18:46:14.092587 kubelet[2719]: E1212 18:46:14.091636 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:14.095880 containerd[1536]: time="2025-12-12T18:46:14.095852753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pf74x,Uid:1f953c43-2b48-4560-b841-b0f4b9f95480,Namespace:kube-system,Attempt:0,}" Dec 12 18:46:14.461156 kubelet[2719]: E1212 18:46:14.460990 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:15.462692 kubelet[2719]: E1212 18:46:15.462639 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:16.230794 systemd-networkd[1443]: cilium_host: Link UP Dec 12 18:46:16.231003 systemd-networkd[1443]: cilium_net: Link UP Dec 12 18:46:16.232277 systemd-networkd[1443]: cilium_net: Gained carrier Dec 12 18:46:16.232480 systemd-networkd[1443]: cilium_host: Gained carrier Dec 12 18:46:16.375347 systemd-networkd[1443]: cilium_vxlan: Link UP Dec 12 18:46:16.375358 systemd-networkd[1443]: cilium_vxlan: Gained carrier Dec 12 18:46:16.469502 kubelet[2719]: E1212 18:46:16.469448 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:16.617349 kernel: NET: Registered PF_ALG protocol family Dec 12 18:46:17.019264 systemd-networkd[1443]: cilium_net: Gained IPv6LL Dec 12 18:46:17.083249 systemd-networkd[1443]: 
cilium_host: Gained IPv6LL Dec 12 18:46:17.395183 systemd-networkd[1443]: lxc_health: Link UP Dec 12 18:46:17.395547 systemd-networkd[1443]: lxc_health: Gained carrier Dec 12 18:46:17.508533 kubelet[2719]: E1212 18:46:17.508484 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:17.527128 kubelet[2719]: I1212 18:46:17.526754 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nxbnr" podStartSLOduration=9.161645443 podStartE2EDuration="15.526739199s" podCreationTimestamp="2025-12-12 18:46:02 +0000 UTC" firstStartedPulling="2025-12-12 18:46:03.673692845 +0000 UTC m=+7.456243638" lastFinishedPulling="2025-12-12 18:46:10.038798801 +0000 UTC m=+13.821337394" observedRunningTime="2025-12-12 18:46:14.487332583 +0000 UTC m=+18.269871176" watchObservedRunningTime="2025-12-12 18:46:17.526739199 +0000 UTC m=+21.309277792" Dec 12 18:46:17.667794 systemd-networkd[1443]: lxccf750bd19c36: Link UP Dec 12 18:46:17.675260 kernel: eth0: renamed from tmp95b55 Dec 12 18:46:17.677768 systemd-networkd[1443]: lxccf750bd19c36: Gained carrier Dec 12 18:46:17.716205 kernel: eth0: renamed from tmpadf1e Dec 12 18:46:17.717328 systemd-networkd[1443]: lxc72d3c4cb4e7d: Link UP Dec 12 18:46:17.719763 systemd-networkd[1443]: lxc72d3c4cb4e7d: Gained carrier Dec 12 18:46:18.301234 systemd-networkd[1443]: cilium_vxlan: Gained IPv6LL Dec 12 18:46:18.474609 kubelet[2719]: E1212 18:46:18.474570 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:18.875310 systemd-networkd[1443]: lxc72d3c4cb4e7d: Gained IPv6LL Dec 12 18:46:19.131315 systemd-networkd[1443]: lxc_health: Gained IPv6LL Dec 12 18:46:19.480145 kubelet[2719]: E1212 18:46:19.478630 2719 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:19.515368 systemd-networkd[1443]: lxccf750bd19c36: Gained IPv6LL Dec 12 18:46:21.146516 containerd[1536]: time="2025-12-12T18:46:21.146447936Z" level=info msg="connecting to shim 95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd" address="unix:///run/containerd/s/5e8610402ccf696872d814f4702c657f15392594d4838df85c358d7349ae428d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:21.165130 containerd[1536]: time="2025-12-12T18:46:21.164250130Z" level=info msg="connecting to shim adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c" address="unix:///run/containerd/s/c143d6eed5e584905b9534fe423f4ed3d9e4ae27d8513644184aaa164808760e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:46:21.200802 systemd[1]: Started cri-containerd-adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c.scope - libcontainer container adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c. Dec 12 18:46:21.208459 systemd[1]: Started cri-containerd-95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd.scope - libcontainer container 95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd. 
Dec 12 18:46:21.292745 containerd[1536]: time="2025-12-12T18:46:21.292698000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pf74x,Uid:1f953c43-2b48-4560-b841-b0f4b9f95480,Namespace:kube-system,Attempt:0,} returns sandbox id \"adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c\"" Dec 12 18:46:21.294130 kubelet[2719]: E1212 18:46:21.294057 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:21.297844 containerd[1536]: time="2025-12-12T18:46:21.297707611Z" level=info msg="CreateContainer within sandbox \"adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:46:21.317363 containerd[1536]: time="2025-12-12T18:46:21.317312070Z" level=info msg="Container bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:21.317711 containerd[1536]: time="2025-12-12T18:46:21.317598704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kcwcd,Uid:54e6d8ef-6291-4da6-957a-c14bd82ffa0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd\"" Dec 12 18:46:21.320458 kubelet[2719]: E1212 18:46:21.319792 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:21.326059 containerd[1536]: time="2025-12-12T18:46:21.325778301Z" level=info msg="CreateContainer within sandbox \"95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:46:21.327128 containerd[1536]: time="2025-12-12T18:46:21.326467451Z" level=info msg="CreateContainer within 
sandbox \"adf1e99a9ea64dd02bea096fb7c34025a3558368073341534e58d6ebc630d59c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77\"" Dec 12 18:46:21.328259 containerd[1536]: time="2025-12-12T18:46:21.328237286Z" level=info msg="StartContainer for \"bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77\"" Dec 12 18:46:21.332583 containerd[1536]: time="2025-12-12T18:46:21.332558577Z" level=info msg="connecting to shim bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77" address="unix:///run/containerd/s/c143d6eed5e584905b9534fe423f4ed3d9e4ae27d8513644184aaa164808760e" protocol=ttrpc version=3 Dec 12 18:46:21.339305 containerd[1536]: time="2025-12-12T18:46:21.339283933Z" level=info msg="Container fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:46:21.362314 containerd[1536]: time="2025-12-12T18:46:21.361173875Z" level=info msg="CreateContainer within sandbox \"95b552ab46c33499d51c3b7a81de5c214161c2016f6f2573b61e13c5c322cfcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1\"" Dec 12 18:46:21.362482 containerd[1536]: time="2025-12-12T18:46:21.362416873Z" level=info msg="StartContainer for \"fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1\"" Dec 12 18:46:21.365314 containerd[1536]: time="2025-12-12T18:46:21.365278263Z" level=info msg="connecting to shim fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1" address="unix:///run/containerd/s/5e8610402ccf696872d814f4702c657f15392594d4838df85c358d7349ae428d" protocol=ttrpc version=3 Dec 12 18:46:21.366432 systemd[1]: Started cri-containerd-bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77.scope - libcontainer container bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77. 
Dec 12 18:46:21.407261 systemd[1]: Started cri-containerd-fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1.scope - libcontainer container fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1. Dec 12 18:46:21.468942 containerd[1536]: time="2025-12-12T18:46:21.468897680Z" level=info msg="StartContainer for \"bc33c4f25b81c1bb2428e359f8d562baebdf4a923d502141d83dce7e95177f77\" returns successfully" Dec 12 18:46:21.482850 containerd[1536]: time="2025-12-12T18:46:21.482768027Z" level=info msg="StartContainer for \"fbbffaed020bf3537fcb01c7d9da80cdda9b6684f704c90baa4eb53334a5fcf1\" returns successfully" Dec 12 18:46:21.490502 kubelet[2719]: E1212 18:46:21.490434 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:21.498132 kubelet[2719]: E1212 18:46:21.496731 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:21.505515 kubelet[2719]: I1212 18:46:21.505475 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kcwcd" podStartSLOduration=18.5054609 podStartE2EDuration="18.5054609s" podCreationTimestamp="2025-12-12 18:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:21.50332906 +0000 UTC m=+25.285867653" watchObservedRunningTime="2025-12-12 18:46:21.5054609 +0000 UTC m=+25.287999493" Dec 12 18:46:21.520422 kubelet[2719]: I1212 18:46:21.520386 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pf74x" podStartSLOduration=18.520254801 podStartE2EDuration="18.520254801s" podCreationTimestamp="2025-12-12 18:46:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:46:21.518654128 +0000 UTC m=+25.301192721" watchObservedRunningTime="2025-12-12 18:46:21.520254801 +0000 UTC m=+25.302793404" Dec 12 18:46:22.131879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2742462189.mount: Deactivated successfully. Dec 12 18:46:22.500384 kubelet[2719]: E1212 18:46:22.499777 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:23.502573 kubelet[2719]: E1212 18:46:23.502517 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:31.496611 kubelet[2719]: E1212 18:46:31.496563 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:46:31.518077 kubelet[2719]: E1212 18:46:31.518009 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:06.342993 kubelet[2719]: E1212 18:47:06.342933 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:15.340652 kubelet[2719]: E1212 18:47:15.340581 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:19.341057 kubelet[2719]: E1212 18:47:19.340962 2719 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:24.342236 kubelet[2719]: E1212 18:47:24.340631 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:24.351622 kubelet[2719]: E1212 18:47:24.343398 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:42.341144 kubelet[2719]: E1212 18:47:42.340871 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:44.343426 kubelet[2719]: E1212 18:47:44.343392 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:47:59.340888 kubelet[2719]: E1212 18:47:59.340848 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:48:08.476994 systemd[1]: Started sshd@7-172.237.139.125:22-139.178.68.195:36284.service - OpenSSH per-connection server daemon (139.178.68.195:36284). Dec 12 18:48:08.827722 sshd[4049]: Accepted publickey for core from 139.178.68.195 port 36284 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:08.829423 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:08.836558 systemd-logind[1528]: New session 8 of user core. 
Dec 12 18:48:08.842279 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:48:09.163845 sshd[4052]: Connection closed by 139.178.68.195 port 36284 Dec 12 18:48:09.164609 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:09.171376 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:48:09.172386 systemd[1]: sshd@7-172.237.139.125:22-139.178.68.195:36284.service: Deactivated successfully. Dec 12 18:48:09.174739 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:48:09.176957 systemd-logind[1528]: Removed session 8. Dec 12 18:48:10.340794 kubelet[2719]: E1212 18:48:10.340695 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:48:14.236243 systemd[1]: Started sshd@8-172.237.139.125:22-139.178.68.195:54462.service - OpenSSH per-connection server daemon (139.178.68.195:54462). Dec 12 18:48:14.588894 sshd[4065]: Accepted publickey for core from 139.178.68.195 port 54462 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:14.590766 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:14.597984 systemd-logind[1528]: New session 9 of user core. Dec 12 18:48:14.602242 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:48:14.899511 sshd[4068]: Connection closed by 139.178.68.195 port 54462 Dec 12 18:48:14.900343 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:14.905553 systemd[1]: sshd@8-172.237.139.125:22-139.178.68.195:54462.service: Deactivated successfully. Dec 12 18:48:14.907821 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:48:14.909716 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:48:14.911235 systemd-logind[1528]: Removed session 9. 
Dec 12 18:48:19.964563 systemd[1]: Started sshd@9-172.237.139.125:22-139.178.68.195:54468.service - OpenSSH per-connection server daemon (139.178.68.195:54468). Dec 12 18:48:20.306259 sshd[4081]: Accepted publickey for core from 139.178.68.195 port 54468 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:20.307978 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:20.313980 systemd-logind[1528]: New session 10 of user core. Dec 12 18:48:20.319239 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:48:20.608280 sshd[4084]: Connection closed by 139.178.68.195 port 54468 Dec 12 18:48:20.609994 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:20.614024 systemd[1]: sshd@9-172.237.139.125:22-139.178.68.195:54468.service: Deactivated successfully. Dec 12 18:48:20.616188 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:48:20.617240 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:48:20.618742 systemd-logind[1528]: Removed session 10. Dec 12 18:48:20.667504 systemd[1]: Started sshd@10-172.237.139.125:22-139.178.68.195:59250.service - OpenSSH per-connection server daemon (139.178.68.195:59250). Dec 12 18:48:21.001927 sshd[4097]: Accepted publickey for core from 139.178.68.195 port 59250 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:21.003649 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:21.009254 systemd-logind[1528]: New session 11 of user core. Dec 12 18:48:21.016240 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:48:21.341526 sshd[4100]: Connection closed by 139.178.68.195 port 59250 Dec 12 18:48:21.342274 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:21.346894 systemd-logind[1528]: Session 11 logged out. 
Waiting for processes to exit. Dec 12 18:48:21.348214 systemd[1]: sshd@10-172.237.139.125:22-139.178.68.195:59250.service: Deactivated successfully. Dec 12 18:48:21.350522 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 18:48:21.352809 systemd-logind[1528]: Removed session 11. Dec 12 18:48:21.407306 systemd[1]: Started sshd@11-172.237.139.125:22-139.178.68.195:59256.service - OpenSSH per-connection server daemon (139.178.68.195:59256). Dec 12 18:48:21.754010 sshd[4110]: Accepted publickey for core from 139.178.68.195 port 59256 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:21.756310 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:21.762995 systemd-logind[1528]: New session 12 of user core. Dec 12 18:48:21.766229 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:48:22.061466 sshd[4113]: Connection closed by 139.178.68.195 port 59256 Dec 12 18:48:22.062132 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:22.067391 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:48:22.068223 systemd[1]: sshd@11-172.237.139.125:22-139.178.68.195:59256.service: Deactivated successfully. Dec 12 18:48:22.070500 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:48:22.072427 systemd-logind[1528]: Removed session 12. Dec 12 18:48:27.136531 systemd[1]: Started sshd@12-172.237.139.125:22-139.178.68.195:59264.service - OpenSSH per-connection server daemon (139.178.68.195:59264). Dec 12 18:48:27.490825 sshd[4124]: Accepted publickey for core from 139.178.68.195 port 59264 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:27.492280 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:27.497254 systemd-logind[1528]: New session 13 of user core. 
Dec 12 18:48:27.504227 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 18:48:27.798826 sshd[4127]: Connection closed by 139.178.68.195 port 59264
Dec 12 18:48:27.799438 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:27.804272 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit.
Dec 12 18:48:27.805101 systemd[1]: sshd@12-172.237.139.125:22-139.178.68.195:59264.service: Deactivated successfully.
Dec 12 18:48:27.807567 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 18:48:27.809052 systemd-logind[1528]: Removed session 13.
Dec 12 18:48:30.340681 kubelet[2719]: E1212 18:48:30.340624 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:48:32.866295 systemd[1]: Started sshd@13-172.237.139.125:22-139.178.68.195:35200.service - OpenSSH per-connection server daemon (139.178.68.195:35200).
Dec 12 18:48:33.221652 sshd[4139]: Accepted publickey for core from 139.178.68.195 port 35200 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:33.223281 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:33.231224 systemd-logind[1528]: New session 14 of user core.
Dec 12 18:48:33.237251 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 18:48:33.531150 sshd[4142]: Connection closed by 139.178.68.195 port 35200
Dec 12 18:48:33.531808 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:33.538354 systemd[1]: sshd@13-172.237.139.125:22-139.178.68.195:35200.service: Deactivated successfully.
Dec 12 18:48:33.540600 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 18:48:33.541692 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit.
Dec 12 18:48:33.542985 systemd-logind[1528]: Removed session 14.
Dec 12 18:48:33.599305 systemd[1]: Started sshd@14-172.237.139.125:22-139.178.68.195:35204.service - OpenSSH per-connection server daemon (139.178.68.195:35204).
Dec 12 18:48:33.945311 sshd[4154]: Accepted publickey for core from 139.178.68.195 port 35204 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:33.946652 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:33.951810 systemd-logind[1528]: New session 15 of user core.
Dec 12 18:48:33.963263 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 18:48:34.274254 sshd[4157]: Connection closed by 139.178.68.195 port 35204
Dec 12 18:48:34.274818 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:34.279280 systemd[1]: sshd@14-172.237.139.125:22-139.178.68.195:35204.service: Deactivated successfully.
Dec 12 18:48:34.281779 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 18:48:34.283063 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit.
Dec 12 18:48:34.285328 systemd-logind[1528]: Removed session 15.
Dec 12 18:48:34.335083 systemd[1]: Started sshd@15-172.237.139.125:22-139.178.68.195:35214.service - OpenSSH per-connection server daemon (139.178.68.195:35214).
Dec 12 18:48:34.666151 sshd[4169]: Accepted publickey for core from 139.178.68.195 port 35214 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:34.667785 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:34.672858 systemd-logind[1528]: New session 16 of user core.
Dec 12 18:48:34.684267 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 18:48:35.340555 kubelet[2719]: E1212 18:48:35.340244 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:48:35.414471 sshd[4172]: Connection closed by 139.178.68.195 port 35214
Dec 12 18:48:35.416210 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:35.421415 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit.
Dec 12 18:48:35.422459 systemd[1]: sshd@15-172.237.139.125:22-139.178.68.195:35214.service: Deactivated successfully.
Dec 12 18:48:35.424753 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 18:48:35.426907 systemd-logind[1528]: Removed session 16.
Dec 12 18:48:35.476131 systemd[1]: Started sshd@16-172.237.139.125:22-139.178.68.195:35222.service - OpenSSH per-connection server daemon (139.178.68.195:35222).
Dec 12 18:48:35.822665 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 35222 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:35.824037 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:35.830464 systemd-logind[1528]: New session 17 of user core.
Dec 12 18:48:35.836212 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 18:48:36.245730 sshd[4193]: Connection closed by 139.178.68.195 port 35222
Dec 12 18:48:36.247580 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:36.252870 systemd[1]: sshd@16-172.237.139.125:22-139.178.68.195:35222.service: Deactivated successfully.
Dec 12 18:48:36.255859 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 18:48:36.257207 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit.
Dec 12 18:48:36.258868 systemd-logind[1528]: Removed session 17.
Dec 12 18:48:36.307380 systemd[1]: Started sshd@17-172.237.139.125:22-139.178.68.195:35234.service - OpenSSH per-connection server daemon (139.178.68.195:35234).
Dec 12 18:48:36.663441 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 35234 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:36.664959 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:36.670173 systemd-logind[1528]: New session 18 of user core.
Dec 12 18:48:36.676251 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 18:48:36.973037 sshd[4206]: Connection closed by 139.178.68.195 port 35234
Dec 12 18:48:36.974057 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:36.978101 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit.
Dec 12 18:48:36.978760 systemd[1]: sshd@17-172.237.139.125:22-139.178.68.195:35234.service: Deactivated successfully.
Dec 12 18:48:36.981173 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 18:48:36.982873 systemd-logind[1528]: Removed session 18.
Dec 12 18:48:42.038674 systemd[1]: Started sshd@18-172.237.139.125:22-139.178.68.195:45058.service - OpenSSH per-connection server daemon (139.178.68.195:45058).
Dec 12 18:48:42.379960 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 45058 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:42.381931 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:42.389223 systemd-logind[1528]: New session 19 of user core.
Dec 12 18:48:42.393417 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 18:48:42.675797 sshd[4222]: Connection closed by 139.178.68.195 port 45058
Dec 12 18:48:42.677283 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:42.681646 systemd[1]: sshd@18-172.237.139.125:22-139.178.68.195:45058.service: Deactivated successfully.
Dec 12 18:48:42.684042 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 18:48:42.685054 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit.
Dec 12 18:48:42.687290 systemd-logind[1528]: Removed session 19.
Dec 12 18:48:46.341753 kubelet[2719]: E1212 18:48:46.340912 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:48:47.743700 systemd[1]: Started sshd@19-172.237.139.125:22-139.178.68.195:45062.service - OpenSSH per-connection server daemon (139.178.68.195:45062).
Dec 12 18:48:48.098825 sshd[4234]: Accepted publickey for core from 139.178.68.195 port 45062 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:48.100543 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:48.105443 systemd-logind[1528]: New session 20 of user core.
Dec 12 18:48:48.112246 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 18:48:48.403204 sshd[4237]: Connection closed by 139.178.68.195 port 45062
Dec 12 18:48:48.404103 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:48.409247 systemd[1]: sshd@19-172.237.139.125:22-139.178.68.195:45062.service: Deactivated successfully.
Dec 12 18:48:48.412398 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 18:48:48.414019 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit.
Dec 12 18:48:48.415262 systemd-logind[1528]: Removed session 20.
Dec 12 18:48:51.341087 kubelet[2719]: E1212 18:48:51.341043 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:48:53.466381 systemd[1]: Started sshd@20-172.237.139.125:22-139.178.68.195:50888.service - OpenSSH per-connection server daemon (139.178.68.195:50888).
Dec 12 18:48:53.828227 sshd[4249]: Accepted publickey for core from 139.178.68.195 port 50888 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:53.830502 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:53.837403 systemd-logind[1528]: New session 21 of user core.
Dec 12 18:48:53.843242 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 18:48:54.140651 sshd[4252]: Connection closed by 139.178.68.195 port 50888
Dec 12 18:48:54.141436 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Dec 12 18:48:54.146388 systemd[1]: sshd@20-172.237.139.125:22-139.178.68.195:50888.service: Deactivated successfully.
Dec 12 18:48:54.146752 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit.
Dec 12 18:48:54.152236 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 18:48:54.154315 systemd-logind[1528]: Removed session 21.
Dec 12 18:48:54.202392 systemd[1]: Started sshd@21-172.237.139.125:22-139.178.68.195:50896.service - OpenSSH per-connection server daemon (139.178.68.195:50896).
Dec 12 18:48:54.547502 sshd[4264]: Accepted publickey for core from 139.178.68.195 port 50896 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA
Dec 12 18:48:54.549229 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 18:48:54.555495 systemd-logind[1528]: New session 22 of user core.
Dec 12 18:48:54.562245 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 18:48:56.040543 containerd[1536]: time="2025-12-12T18:48:56.040411286Z" level=info msg="StopContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" with timeout 30 (s)"
Dec 12 18:48:56.041990 containerd[1536]: time="2025-12-12T18:48:56.041826884Z" level=info msg="Stop container \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" with signal terminated"
Dec 12 18:48:56.063209 systemd[1]: cri-containerd-1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc.scope: Deactivated successfully.
Dec 12 18:48:56.069374 containerd[1536]: time="2025-12-12T18:48:56.069255744Z" level=info msg="received container exit event container_id:\"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" id:\"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" pid:3240 exited_at:{seconds:1765565336 nanos:68787475}"
Dec 12 18:48:56.075625 containerd[1536]: time="2025-12-12T18:48:56.075602846Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 12 18:48:56.089541 containerd[1536]: time="2025-12-12T18:48:56.089430554Z" level=info msg="StopContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" with timeout 2 (s)"
Dec 12 18:48:56.089851 containerd[1536]: time="2025-12-12T18:48:56.089836151Z" level=info msg="Stop container \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" with signal terminated"
Dec 12 18:48:56.103729 systemd-networkd[1443]: lxc_health: Link DOWN
Dec 12 18:48:56.103740 systemd-networkd[1443]: lxc_health: Lost carrier
Dec 12 18:48:56.115054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc-rootfs.mount: Deactivated successfully.
Dec 12 18:48:56.129015 systemd[1]: cri-containerd-524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651.scope: Deactivated successfully.
Dec 12 18:48:56.129385 systemd[1]: cri-containerd-524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651.scope: Consumed 6.973s CPU time, 126.1M memory peak, 128K read from disk, 13.3M written to disk.
Dec 12 18:48:56.133393 containerd[1536]: time="2025-12-12T18:48:56.132880663Z" level=info msg="received container exit event container_id:\"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" id:\"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" pid:3351 exited_at:{seconds:1765565336 nanos:132631068}"
Dec 12 18:48:56.139237 containerd[1536]: time="2025-12-12T18:48:56.139208595Z" level=info msg="StopContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" returns successfully"
Dec 12 18:48:56.140498 containerd[1536]: time="2025-12-12T18:48:56.140471210Z" level=info msg="StopPodSandbox for \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\""
Dec 12 18:48:56.140925 containerd[1536]: time="2025-12-12T18:48:56.140889518Z" level=info msg="Container to stop \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.154452 systemd[1]: cri-containerd-9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9.scope: Deactivated successfully.
Dec 12 18:48:56.158682 containerd[1536]: time="2025-12-12T18:48:56.158645251Z" level=info msg="received sandbox exit event container_id:\"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" id:\"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" exit_status:137 exited_at:{seconds:1765565336 nanos:158403816}" monitor_name=podsandbox
Dec 12 18:48:56.176437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651-rootfs.mount: Deactivated successfully.
Dec 12 18:48:56.186217 containerd[1536]: time="2025-12-12T18:48:56.186149542Z" level=info msg="StopContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" returns successfully"
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189325844Z" level=info msg="StopPodSandbox for \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\""
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189380475Z" level=info msg="Container to stop \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189390725Z" level=info msg="Container to stop \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189399025Z" level=info msg="Container to stop \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189407095Z" level=info msg="Container to stop \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.189448 containerd[1536]: time="2025-12-12T18:48:56.189414345Z" level=info msg="Container to stop \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 12 18:48:56.201180 containerd[1536]: time="2025-12-12T18:48:56.200936678Z" level=info msg="shim disconnected" id=9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9 namespace=k8s.io
Dec 12 18:48:56.201180 containerd[1536]: time="2025-12-12T18:48:56.201145902Z" level=warning msg="cleaning up after shim disconnected" id=9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9 namespace=k8s.io
Dec 12 18:48:56.202073 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9-rootfs.mount: Deactivated successfully.
Dec 12 18:48:56.203359 systemd[1]: cri-containerd-e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83.scope: Deactivated successfully.
Dec 12 18:48:56.206275 containerd[1536]: time="2025-12-12T18:48:56.201157702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:48:56.213680 containerd[1536]: time="2025-12-12T18:48:56.213655044Z" level=info msg="received sandbox exit event container_id:\"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" id:\"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" exit_status:137 exited_at:{seconds:1765565336 nanos:213335998}" monitor_name=podsandbox
Dec 12 18:48:56.218161 containerd[1536]: time="2025-12-12T18:48:56.217412016Z" level=info msg="received sandbox container exit event sandbox_id:\"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" exit_status:137 exited_at:{seconds:1765565336 nanos:158403816}" monitor_name=criService
Dec 12 18:48:56.219806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9-shm.mount: Deactivated successfully.
Dec 12 18:48:56.221611 containerd[1536]: time="2025-12-12T18:48:56.217569319Z" level=info msg="TearDown network for sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" successfully"
Dec 12 18:48:56.221611 containerd[1536]: time="2025-12-12T18:48:56.220721270Z" level=info msg="StopPodSandbox for \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" returns successfully"
Dec 12 18:48:56.250492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83-rootfs.mount: Deactivated successfully.
Dec 12 18:48:56.254188 containerd[1536]: time="2025-12-12T18:48:56.254086505Z" level=info msg="shim disconnected" id=e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83 namespace=k8s.io
Dec 12 18:48:56.254188 containerd[1536]: time="2025-12-12T18:48:56.254140206Z" level=warning msg="cleaning up after shim disconnected" id=e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83 namespace=k8s.io
Dec 12 18:48:56.254374 containerd[1536]: time="2025-12-12T18:48:56.254213617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 12 18:48:56.268608 containerd[1536]: time="2025-12-12T18:48:56.268527394Z" level=info msg="received sandbox container exit event sandbox_id:\"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" exit_status:137 exited_at:{seconds:1765565336 nanos:213335998}" monitor_name=criService
Dec 12 18:48:56.269213 containerd[1536]: time="2025-12-12T18:48:56.269033224Z" level=info msg="TearDown network for sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" successfully"
Dec 12 18:48:56.269213 containerd[1536]: time="2025-12-12T18:48:56.269059894Z" level=info msg="StopPodSandbox for \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" returns successfully"
Dec 12 18:48:56.326896 kubelet[2719]: I1212 18:48:56.326623 2719 scope.go:117] "RemoveContainer" containerID="99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988"
Dec 12 18:48:56.328919 containerd[1536]: time="2025-12-12T18:48:56.328889220Z" level=info msg="RemoveContainer for \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\""
Dec 12 18:48:56.333299 containerd[1536]: time="2025-12-12T18:48:56.333275225Z" level=info msg="RemoveContainer for \"99d2a3b57f75d910547781ef61888267ebc088fe8d9f27981b57cb8862a74988\" returns successfully"
Dec 12 18:48:56.333566 kubelet[2719]: I1212 18:48:56.333520 2719 scope.go:117] "RemoveContainer" containerID="1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc"
Dec 12 18:48:56.335340 containerd[1536]: time="2025-12-12T18:48:56.335154631Z" level=info msg="RemoveContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\""
Dec 12 18:48:56.338383 containerd[1536]: time="2025-12-12T18:48:56.338335303Z" level=info msg="RemoveContainer for \"1a72af087d97baa188667c2c75bf0735b8c9fc832388a5abdda7a891151625bc\" returns successfully"
Dec 12 18:48:56.338606 kubelet[2719]: I1212 18:48:56.338474 2719 scope.go:117] "RemoveContainer" containerID="687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4"
Dec 12 18:48:56.340137 containerd[1536]: time="2025-12-12T18:48:56.340056506Z" level=info msg="RemoveContainer for \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\""
Dec 12 18:48:56.346675 containerd[1536]: time="2025-12-12T18:48:56.346598272Z" level=info msg="RemoveContainer for \"687f08c8d906497f66fe3ba6860bc26ee642dc4e99fcc0dd28529488c3eb26a4\" returns successfully"
Dec 12 18:48:56.348126 kubelet[2719]: I1212 18:48:56.348077 2719 scope.go:117] "RemoveContainer" containerID="27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470"
Dec 12 18:48:56.352378 containerd[1536]: time="2025-12-12T18:48:56.352343203Z" level=info msg="RemoveContainer for \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\""
Dec 12 18:48:56.357102 containerd[1536]: time="2025-12-12T18:48:56.357043594Z" level=info msg="RemoveContainer for \"27c31353adb076de9a6b052a46c3d1bcc55541cfcaf56452dd95f51097c24470\" returns successfully"
Dec 12 18:48:56.357259 kubelet[2719]: I1212 18:48:56.357225 2719 scope.go:117] "RemoveContainer" containerID="79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1"
Dec 12 18:48:56.359181 containerd[1536]: time="2025-12-12T18:48:56.359158155Z" level=info msg="RemoveContainer for \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\""
Dec 12 18:48:56.362457 containerd[1536]: time="2025-12-12T18:48:56.362423378Z" level=info msg="RemoveContainer for \"79a27d2a6e7895d43028f4c1576a3d0bae16c2c21af9a124654893a0f08132f1\" returns successfully"
Dec 12 18:48:56.362777 kubelet[2719]: I1212 18:48:56.362637 2719 scope.go:117] "RemoveContainer" containerID="524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651"
Dec 12 18:48:56.363954 containerd[1536]: time="2025-12-12T18:48:56.363927937Z" level=info msg="RemoveContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\""
Dec 12 18:48:56.366263 containerd[1536]: time="2025-12-12T18:48:56.366236102Z" level=info msg="RemoveContainer for \"524699b4d534fdce5ab5e62a375d6a5f2de530f7c0e8e17e290bd4d325b8c651\" returns successfully"
Dec 12 18:48:56.367367 containerd[1536]: time="2025-12-12T18:48:56.367332303Z" level=info msg="StopPodSandbox for \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\""
Dec 12 18:48:56.367469 containerd[1536]: time="2025-12-12T18:48:56.367412115Z" level=info msg="TearDown network for sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" successfully"
Dec 12 18:48:56.367469 containerd[1536]: time="2025-12-12T18:48:56.367422835Z" level=info msg="StopPodSandbox for \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" returns successfully"
Dec 12 18:48:56.367883 containerd[1536]: time="2025-12-12T18:48:56.367857093Z" level=info msg="RemovePodSandbox for \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\""
Dec 12 18:48:56.367939 containerd[1536]: time="2025-12-12T18:48:56.367903574Z" level=info msg="Forcibly stopping sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\""
Dec 12 18:48:56.367996 containerd[1536]: time="2025-12-12T18:48:56.367977025Z" level=info msg="TearDown network for sandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" successfully"
Dec 12 18:48:56.369172 containerd[1536]: time="2025-12-12T18:48:56.369142408Z" level=info msg="Ensure that sandbox 9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9 in task-service has been cleanup successfully"
Dec 12 18:48:56.370701 containerd[1536]: time="2025-12-12T18:48:56.370673538Z" level=info msg="RemovePodSandbox \"9603b0f849a7033b2c5341918feef2e2102748973a64e09cc097360049eb9bd9\" returns successfully"
Dec 12 18:48:56.370981 containerd[1536]: time="2025-12-12T18:48:56.370953583Z" level=info msg="StopPodSandbox for \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\""
Dec 12 18:48:56.371052 containerd[1536]: time="2025-12-12T18:48:56.371024234Z" level=info msg="TearDown network for sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" successfully"
Dec 12 18:48:56.371052 containerd[1536]: time="2025-12-12T18:48:56.371036445Z" level=info msg="StopPodSandbox for \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" returns successfully"
Dec 12 18:48:56.371337 containerd[1536]: time="2025-12-12T18:48:56.371306870Z" level=info msg="RemovePodSandbox for \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\""
Dec 12 18:48:56.371337 containerd[1536]: time="2025-12-12T18:48:56.371332830Z" level=info msg="Forcibly stopping sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\""
Dec 12 18:48:56.371404 containerd[1536]: time="2025-12-12T18:48:56.371382851Z" level=info msg="TearDown network for sandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" successfully"
Dec 12 18:48:56.372525 containerd[1536]: time="2025-12-12T18:48:56.372493933Z" level=info msg="Ensure that sandbox e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83 in task-service has been cleanup successfully"
Dec 12 18:48:56.373983 containerd[1536]: time="2025-12-12T18:48:56.373960961Z" level=info msg="RemovePodSandbox \"e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83\" returns successfully"
Dec 12 18:48:56.378536 kubelet[2719]: I1212 18:48:56.378502 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fb9d37-3fc4-493e-a286-c30451214f46-cilium-config-path\") pod \"14fb9d37-3fc4-493e-a286-c30451214f46\" (UID: \"14fb9d37-3fc4-493e-a286-c30451214f46\") "
Dec 12 18:48:56.378623 kubelet[2719]: I1212 18:48:56.378538 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql7cf\" (UniqueName: \"kubernetes.io/projected/14fb9d37-3fc4-493e-a286-c30451214f46-kube-api-access-ql7cf\") pod \"14fb9d37-3fc4-493e-a286-c30451214f46\" (UID: \"14fb9d37-3fc4-493e-a286-c30451214f46\") "
Dec 12 18:48:56.378623 kubelet[2719]: I1212 18:48:56.378560 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hubble-tls\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.378623 kubelet[2719]: I1212 18:48:56.378579 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-lib-modules\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.378991 kubelet[2719]: I1212 18:48:56.378633 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 12 18:48:56.382999 kubelet[2719]: I1212 18:48:56.382974 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14fb9d37-3fc4-493e-a286-c30451214f46-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14fb9d37-3fc4-493e-a286-c30451214f46" (UID: "14fb9d37-3fc4-493e-a286-c30451214f46"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 12 18:48:56.384204 kubelet[2719]: I1212 18:48:56.384173 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 18:48:56.384768 kubelet[2719]: I1212 18:48:56.384734 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fb9d37-3fc4-493e-a286-c30451214f46-kube-api-access-ql7cf" (OuterVolumeSpecName: "kube-api-access-ql7cf") pod "14fb9d37-3fc4-493e-a286-c30451214f46" (UID: "14fb9d37-3fc4-493e-a286-c30451214f46"). InnerVolumeSpecName "kube-api-access-ql7cf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 12 18:48:56.455545 kubelet[2719]: E1212 18:48:56.455511 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 12 18:48:56.479147 kubelet[2719]: I1212 18:48:56.479067 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-config-path\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479154 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-etc-cni-netd\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479172 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-kernel\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479187 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-run\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479202 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-xtables-lock\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479218 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cni-path\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479245 kubelet[2719]: I1212 18:48:56.479238 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5d663a3-5f91-4d81-a2b6-8af2427171bb-clustermesh-secrets\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479256 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkczl\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479273 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-net\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479290 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-cgroup\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479305 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hostproc\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479319 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-bpf-maps\") pod \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\" (UID: \"b5d663a3-5f91-4d81-a2b6-8af2427171bb\") "
Dec 12 18:48:56.479396 kubelet[2719]: I1212 18:48:56.479352 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14fb9d37-3fc4-493e-a286-c30451214f46-cilium-config-path\") on node \"172-237-139-125\" DevicePath \"\""
Dec 12 18:48:56.479577 kubelet[2719]: I1212 18:48:56.479364 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ql7cf\" (UniqueName: \"kubernetes.io/projected/14fb9d37-3fc4-493e-a286-c30451214f46-kube-api-access-ql7cf\") on node \"172-237-139-125\" DevicePath \"\""
Dec 12 18:48:56.479577 kubelet[2719]: I1212 18:48:56.479376 2719 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hubble-tls\") on node \"172-237-139-125\" DevicePath \"\""
Dec 12 18:48:56.479577 kubelet[2719]: I1212 18:48:56.479386 2719 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-lib-modules\") on node \"172-237-139-125\" DevicePath \"\""
Dec 12 18:48:56.479577 kubelet[2719]: I1212 18:48:56.479426 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb").
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481125 kubelet[2719]: I1212 18:48:56.479741 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481125 kubelet[2719]: I1212 18:48:56.479779 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481125 kubelet[2719]: I1212 18:48:56.479799 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481125 kubelet[2719]: I1212 18:48:56.479815 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481125 kubelet[2719]: I1212 18:48:56.479829 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481265 kubelet[2719]: I1212 18:48:56.479843 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481587 kubelet[2719]: I1212 18:48:56.481558 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.481627 kubelet[2719]: I1212 18:48:56.481589 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 12 18:48:56.483440 kubelet[2719]: I1212 18:48:56.483420 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5d663a3-5f91-4d81-a2b6-8af2427171bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:48:56.486164 kubelet[2719]: I1212 18:48:56.486075 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl" (OuterVolumeSpecName: "kube-api-access-fkczl") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "kube-api-access-fkczl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:48:56.486407 kubelet[2719]: I1212 18:48:56.486388 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5d663a3-5f91-4d81-a2b6-8af2427171bb" (UID: "b5d663a3-5f91-4d81-a2b6-8af2427171bb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579491 2719 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-etc-cni-netd\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579539 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-kernel\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579552 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-run\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579561 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-config-path\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579570 2719 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-xtables-lock\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579579 2719 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cni-path\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579587 2719 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5d663a3-5f91-4d81-a2b6-8af2427171bb-clustermesh-secrets\") on node \"172-237-139-125\" DevicePath \"\"" Dec 
12 18:48:56.579628 kubelet[2719]: I1212 18:48:56.579596 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fkczl\" (UniqueName: \"kubernetes.io/projected/b5d663a3-5f91-4d81-a2b6-8af2427171bb-kube-api-access-fkczl\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579952 kubelet[2719]: I1212 18:48:56.579605 2719 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-host-proc-sys-net\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579952 kubelet[2719]: I1212 18:48:56.579614 2719 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-hostproc\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579952 kubelet[2719]: I1212 18:48:56.579623 2719 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-cilium-cgroup\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.579952 kubelet[2719]: I1212 18:48:56.579632 2719 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5d663a3-5f91-4d81-a2b6-8af2427171bb-bpf-maps\") on node \"172-237-139-125\" DevicePath \"\"" Dec 12 18:48:56.824005 systemd[1]: Removed slice kubepods-besteffort-pod14fb9d37_3fc4_493e_a286_c30451214f46.slice - libcontainer container kubepods-besteffort-pod14fb9d37_3fc4_493e_a286_c30451214f46.slice. Dec 12 18:48:56.831205 systemd[1]: Removed slice kubepods-burstable-podb5d663a3_5f91_4d81_a2b6_8af2427171bb.slice - libcontainer container kubepods-burstable-podb5d663a3_5f91_4d81_a2b6_8af2427171bb.slice. Dec 12 18:48:56.831356 systemd[1]: kubepods-burstable-podb5d663a3_5f91_4d81_a2b6_8af2427171bb.slice: Consumed 7.108s CPU time, 126.6M memory peak, 128K read from disk, 13.3M written to disk. 
Dec 12 18:48:57.110146 systemd[1]: var-lib-kubelet-pods-14fb9d37\x2d3fc4\x2d493e\x2da286\x2dc30451214f46-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dql7cf.mount: Deactivated successfully. Dec 12 18:48:57.110280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0731f7e1c413d4e4bc6c445e7a740407ee18d3228098a8cf797150753967a83-shm.mount: Deactivated successfully. Dec 12 18:48:57.110355 systemd[1]: var-lib-kubelet-pods-b5d663a3\x2d5f91\x2d4d81\x2da2b6\x2d8af2427171bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfkczl.mount: Deactivated successfully. Dec 12 18:48:57.110432 systemd[1]: var-lib-kubelet-pods-b5d663a3\x2d5f91\x2d4d81\x2da2b6\x2d8af2427171bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 12 18:48:57.111029 systemd[1]: var-lib-kubelet-pods-b5d663a3\x2d5f91\x2d4d81\x2da2b6\x2d8af2427171bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 12 18:48:58.039719 sshd[4267]: Connection closed by 139.178.68.195 port 50896 Dec 12 18:48:58.040487 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:58.045365 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:48:58.046572 systemd[1]: sshd@21-172.237.139.125:22-139.178.68.195:50896.service: Deactivated successfully. Dec 12 18:48:58.048981 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:48:58.051185 systemd-logind[1528]: Removed session 22. Dec 12 18:48:58.100446 systemd[1]: Started sshd@22-172.237.139.125:22-139.178.68.195:50910.service - OpenSSH per-connection server daemon (139.178.68.195:50910). 
Dec 12 18:48:58.344430 kubelet[2719]: I1212 18:48:58.344046 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14fb9d37-3fc4-493e-a286-c30451214f46" path="/var/lib/kubelet/pods/14fb9d37-3fc4-493e-a286-c30451214f46/volumes" Dec 12 18:48:58.345479 kubelet[2719]: I1212 18:48:58.345073 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d663a3-5f91-4d81-a2b6-8af2427171bb" path="/var/lib/kubelet/pods/b5d663a3-5f91-4d81-a2b6-8af2427171bb/volumes" Dec 12 18:48:58.437567 sshd[4412]: Accepted publickey for core from 139.178.68.195 port 50910 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:58.439135 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:58.445305 systemd-logind[1528]: New session 23 of user core. Dec 12 18:48:58.451264 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:48:59.479250 kubelet[2719]: I1212 18:48:59.478741 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="b5d663a3-5f91-4d81-a2b6-8af2427171bb" containerName="cilium-agent" Dec 12 18:48:59.479250 kubelet[2719]: I1212 18:48:59.478766 2719 memory_manager.go:355] "RemoveStaleState removing state" podUID="14fb9d37-3fc4-493e-a286-c30451214f46" containerName="cilium-operator" Dec 12 18:48:59.482292 kubelet[2719]: W1212 18:48:59.482268 2719 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-237-139-125" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-237-139-125' and this object Dec 12 18:48:59.482454 kubelet[2719]: E1212 18:48:59.482312 2719 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-237-139-125\" 
cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-237-139-125' and this object" logger="UnhandledError" Dec 12 18:48:59.486840 sshd[4415]: Connection closed by 139.178.68.195 port 50910 Dec 12 18:48:59.488777 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Dec 12 18:48:59.494001 systemd[1]: Created slice kubepods-burstable-podbb42ddf0_3203_46e1_9853_343ac4146a1a.slice - libcontainer container kubepods-burstable-podbb42ddf0_3203_46e1_9853_343ac4146a1a.slice. Dec 12 18:48:59.496140 kubelet[2719]: I1212 18:48:59.496002 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-config-path\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.496140 kubelet[2719]: I1212 18:48:59.496029 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhsv\" (UniqueName: \"kubernetes.io/projected/bb42ddf0-3203-46e1-9853-343ac4146a1a-kube-api-access-qkhsv\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.496140 kubelet[2719]: I1212 18:48:59.496045 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-ipsec-secrets\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.496140 kubelet[2719]: I1212 18:48:59.496057 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-host-proc-sys-kernel\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.496140 kubelet[2719]: I1212 18:48:59.496071 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-etc-cni-netd\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.497476 kubelet[2719]: I1212 18:48:59.496085 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-cgroup\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.497476 kubelet[2719]: I1212 18:48:59.496099 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-host-proc-sys-net\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.499606 kubelet[2719]: I1212 18:48:59.499569 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb42ddf0-3203-46e1-9853-343ac4146a1a-hubble-tls\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.499832 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-hostproc\") pod \"cilium-5gvdr\" (UID: 
\"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.500078 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-cni-path\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.500093 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-lib-modules\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.500156 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-bpf-maps\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.500172 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-run\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500375 kubelet[2719]: I1212 18:48:59.500185 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb42ddf0-3203-46e1-9853-343ac4146a1a-xtables-lock\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.500531 kubelet[2719]: I1212 18:48:59.500229 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb42ddf0-3203-46e1-9853-343ac4146a1a-clustermesh-secrets\") pod \"cilium-5gvdr\" (UID: \"bb42ddf0-3203-46e1-9853-343ac4146a1a\") " pod="kube-system/cilium-5gvdr" Dec 12 18:48:59.504675 systemd[1]: sshd@22-172.237.139.125:22-139.178.68.195:50910.service: Deactivated successfully. Dec 12 18:48:59.507881 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:48:59.510223 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:48:59.513065 systemd-logind[1528]: Removed session 23. Dec 12 18:48:59.546962 systemd[1]: Started sshd@23-172.237.139.125:22-139.178.68.195:50914.service - OpenSSH per-connection server daemon (139.178.68.195:50914). Dec 12 18:48:59.880039 sshd[4425]: Accepted publickey for core from 139.178.68.195 port 50914 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:48:59.881649 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:48:59.886944 systemd-logind[1528]: New session 24 of user core. Dec 12 18:48:59.892249 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:49:00.122636 sshd[4432]: Connection closed by 139.178.68.195 port 50914 Dec 12 18:49:00.123479 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Dec 12 18:49:00.128174 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:49:00.128378 systemd[1]: sshd@23-172.237.139.125:22-139.178.68.195:50914.service: Deactivated successfully. Dec 12 18:49:00.130346 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:49:00.132968 systemd-logind[1528]: Removed session 24. Dec 12 18:49:00.188223 systemd[1]: Started sshd@24-172.237.139.125:22-139.178.68.195:59952.service - OpenSSH per-connection server daemon (139.178.68.195:59952). 
Dec 12 18:49:00.471650 kubelet[2719]: I1212 18:49:00.470735 2719 setters.go:602] "Node became not ready" node="172-237-139-125" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-12T18:49:00Z","lastTransitionTime":"2025-12-12T18:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 12 18:49:00.541745 sshd[4439]: Accepted publickey for core from 139.178.68.195 port 59952 ssh2: RSA SHA256:UiG6bZktVSf6i9iX5VzUPVpLWgzfdn4YYRb2mBOdRlA Dec 12 18:49:00.544006 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:49:00.551184 systemd-logind[1528]: New session 25 of user core. Dec 12 18:49:00.558299 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 18:49:00.601363 kubelet[2719]: E1212 18:49:00.601319 2719 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 12 18:49:00.601732 kubelet[2719]: E1212 18:49:00.601434 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-config-path podName:bb42ddf0-3203-46e1-9853-343ac4146a1a nodeName:}" failed. No retries permitted until 2025-12-12 18:49:01.101415171 +0000 UTC m=+184.883953764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/bb42ddf0-3203-46e1-9853-343ac4146a1a-cilium-config-path") pod "cilium-5gvdr" (UID: "bb42ddf0-3203-46e1-9853-343ac4146a1a") : failed to sync configmap cache: timed out waiting for the condition Dec 12 18:49:01.303140 kubelet[2719]: E1212 18:49:01.302981 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:49:01.304213 containerd[1536]: time="2025-12-12T18:49:01.303603227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gvdr,Uid:bb42ddf0-3203-46e1-9853-343ac4146a1a,Namespace:kube-system,Attempt:0,}" Dec 12 18:49:01.321604 containerd[1536]: time="2025-12-12T18:49:01.321564286Z" level=info msg="connecting to shim 493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:49:01.353278 systemd[1]: Started cri-containerd-493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995.scope - libcontainer container 493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995. 
Dec 12 18:49:01.385925 containerd[1536]: time="2025-12-12T18:49:01.385852131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gvdr,Uid:bb42ddf0-3203-46e1-9853-343ac4146a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\"" Dec 12 18:49:01.387024 kubelet[2719]: E1212 18:49:01.387001 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Dec 12 18:49:01.390738 containerd[1536]: time="2025-12-12T18:49:01.390710012Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 12 18:49:01.408169 containerd[1536]: time="2025-12-12T18:49:01.405241167Z" level=info msg="Container bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:49:01.410397 containerd[1536]: time="2025-12-12T18:49:01.410366624Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea\"" Dec 12 18:49:01.410989 containerd[1536]: time="2025-12-12T18:49:01.410970665Z" level=info msg="StartContainer for \"bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea\"" Dec 12 18:49:01.412019 containerd[1536]: time="2025-12-12T18:49:01.411988784Z" level=info msg="connecting to shim bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" protocol=ttrpc version=3 Dec 12 18:49:01.436228 systemd[1]: Started cri-containerd-bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea.scope - 
libcontainer container bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea.
Dec 12 18:49:01.457380 kubelet[2719]: E1212 18:49:01.457351 2719 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 12 18:49:01.467379 containerd[1536]: time="2025-12-12T18:49:01.467337620Z" level=info msg="StartContainer for \"bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea\" returns successfully"
Dec 12 18:49:01.479066 systemd[1]: cri-containerd-bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea.scope: Deactivated successfully.
Dec 12 18:49:01.482712 containerd[1536]: time="2025-12-12T18:49:01.482683880Z" level=info msg="received container exit event container_id:\"bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea\" id:\"bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea\" pid:4506 exited_at:{seconds:1765565341 nanos:482421585}"
Dec 12 18:49:01.831478 kubelet[2719]: E1212 18:49:01.831429 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:01.833185 containerd[1536]: time="2025-12-12T18:49:01.833145300Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 12 18:49:01.842091 containerd[1536]: time="2025-12-12T18:49:01.842055718Z" level=info msg="Container 91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:49:01.846685 containerd[1536]: time="2025-12-12T18:49:01.846661305Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d\""
Dec 12 18:49:01.847214 containerd[1536]: time="2025-12-12T18:49:01.847193975Z" level=info msg="StartContainer for \"91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d\""
Dec 12 18:49:01.848712 containerd[1536]: time="2025-12-12T18:49:01.848676823Z" level=info msg="connecting to shim 91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" protocol=ttrpc version=3
Dec 12 18:49:01.873250 systemd[1]: Started cri-containerd-91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d.scope - libcontainer container 91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d.
Dec 12 18:49:01.906519 containerd[1536]: time="2025-12-12T18:49:01.906452255Z" level=info msg="StartContainer for \"91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d\" returns successfully"
Dec 12 18:49:01.914821 systemd[1]: cri-containerd-91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d.scope: Deactivated successfully.
Dec 12 18:49:01.917325 containerd[1536]: time="2025-12-12T18:49:01.917175177Z" level=info msg="received container exit event container_id:\"91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d\" id:\"91bbc0457a262c4f1a5c09ccdfde0865ca1dd7389d3176e8b13387e9e08f5b5d\" pid:4550 exited_at:{seconds:1765565341 nanos:915481855}"
Dec 12 18:49:02.313462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb386891b2b9473f7d759fddcd2523f3b61130e8a1e1719d87db5347e055a3ea-rootfs.mount: Deactivated successfully.
Dec 12 18:49:02.835265 kubelet[2719]: E1212 18:49:02.834591 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:02.838279 containerd[1536]: time="2025-12-12T18:49:02.838182088Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 12 18:49:02.853382 containerd[1536]: time="2025-12-12T18:49:02.850606182Z" level=info msg="Container 69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:49:02.857357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953188675.mount: Deactivated successfully.
Dec 12 18:49:02.864033 containerd[1536]: time="2025-12-12T18:49:02.864001834Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37\""
Dec 12 18:49:02.865405 containerd[1536]: time="2025-12-12T18:49:02.864471723Z" level=info msg="StartContainer for \"69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37\""
Dec 12 18:49:02.866609 containerd[1536]: time="2025-12-12T18:49:02.866583742Z" level=info msg="connecting to shim 69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" protocol=ttrpc version=3
Dec 12 18:49:02.894240 systemd[1]: Started cri-containerd-69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37.scope - libcontainer container 69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37.
Dec 12 18:49:02.968547 systemd[1]: cri-containerd-69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37.scope: Deactivated successfully.
Dec 12 18:49:02.969058 containerd[1536]: time="2025-12-12T18:49:02.969034199Z" level=info msg="StartContainer for \"69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37\" returns successfully"
Dec 12 18:49:02.971967 containerd[1536]: time="2025-12-12T18:49:02.971935074Z" level=info msg="received container exit event container_id:\"69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37\" id:\"69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37\" pid:4598 exited_at:{seconds:1765565342 nanos:971737640}"
Dec 12 18:49:02.999718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69576b6b1eef3109688ec79b2968e004c30f4b4c74c5e0f7af26302c689c9b37-rootfs.mount: Deactivated successfully.
Dec 12 18:49:03.839652 kubelet[2719]: E1212 18:49:03.839586 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:03.845191 containerd[1536]: time="2025-12-12T18:49:03.843738499Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 12 18:49:03.854290 containerd[1536]: time="2025-12-12T18:49:03.854260666Z" level=info msg="Container 87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:49:03.863588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103027449.mount: Deactivated successfully.
Dec 12 18:49:03.866929 containerd[1536]: time="2025-12-12T18:49:03.866894663Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8\""
Dec 12 18:49:03.867487 containerd[1536]: time="2025-12-12T18:49:03.867454923Z" level=info msg="StartContainer for \"87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8\""
Dec 12 18:49:03.868810 containerd[1536]: time="2025-12-12T18:49:03.868783138Z" level=info msg="connecting to shim 87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" protocol=ttrpc version=3
Dec 12 18:49:03.897228 systemd[1]: Started cri-containerd-87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8.scope - libcontainer container 87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8.
Dec 12 18:49:03.928869 systemd[1]: cri-containerd-87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8.scope: Deactivated successfully.
Dec 12 18:49:03.931490 containerd[1536]: time="2025-12-12T18:49:03.931346710Z" level=info msg="received container exit event container_id:\"87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8\" id:\"87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8\" pid:4639 exited_at:{seconds:1765565343 nanos:930444683}"
Dec 12 18:49:03.941072 containerd[1536]: time="2025-12-12T18:49:03.941051522Z" level=info msg="StartContainer for \"87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8\" returns successfully"
Dec 12 18:49:03.955938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87e1eed6a88dbe3c70969492b8423caa70e86c762b5be6371a5faed63a82ebe8-rootfs.mount: Deactivated successfully.
Dec 12 18:49:04.341237 kubelet[2719]: E1212 18:49:04.340897 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-pf74x" podUID="1f953c43-2b48-4560-b841-b0f4b9f95480"
Dec 12 18:49:04.847947 kubelet[2719]: E1212 18:49:04.846951 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:04.852344 containerd[1536]: time="2025-12-12T18:49:04.851660479Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 12 18:49:04.866518 containerd[1536]: time="2025-12-12T18:49:04.866458685Z" level=info msg="Container 4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17: CDI devices from CRI Config.CDIDevices: []"
Dec 12 18:49:04.877688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648232642.mount: Deactivated successfully.
Dec 12 18:49:04.881569 containerd[1536]: time="2025-12-12T18:49:04.881536056Z" level=info msg="CreateContainer within sandbox \"493b4788a8abe0ebdf8ffff84db5dc7942aa9caaad2593a1ed5a257e2102a995\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17\""
Dec 12 18:49:04.883194 containerd[1536]: time="2025-12-12T18:49:04.882215059Z" level=info msg="StartContainer for \"4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17\""
Dec 12 18:49:04.883596 containerd[1536]: time="2025-12-12T18:49:04.883534743Z" level=info msg="connecting to shim 4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17" address="unix:///run/containerd/s/7147022f2bfbb0f197d665f5a5d1b10116fcec0fa4c038f871da4a25a0f2045d" protocol=ttrpc version=3
Dec 12 18:49:04.908238 systemd[1]: Started cri-containerd-4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17.scope - libcontainer container 4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17.
Dec 12 18:49:04.963664 containerd[1536]: time="2025-12-12T18:49:04.963625128Z" level=info msg="StartContainer for \"4eac7ac31a7ff07bdced8409eddd04d1bfdd8b8b6c8470c2fcc4554105554f17\" returns successfully"
Dec 12 18:49:05.446241 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Dec 12 18:49:05.854661 kubelet[2719]: E1212 18:49:05.854627 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:06.341583 kubelet[2719]: E1212 18:49:06.340503 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-pf74x" podUID="1f953c43-2b48-4560-b841-b0f4b9f95480"
Dec 12 18:49:07.304271 kubelet[2719]: E1212 18:49:07.304232 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:08.341813 kubelet[2719]: E1212 18:49:08.341448 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:08.509803 systemd-networkd[1443]: lxc_health: Link UP
Dec 12 18:49:08.519660 systemd-networkd[1443]: lxc_health: Gained carrier
Dec 12 18:49:09.307160 kubelet[2719]: E1212 18:49:09.306956 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:09.328802 kubelet[2719]: I1212 18:49:09.328749 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5gvdr" podStartSLOduration=10.328731989 podStartE2EDuration="10.328731989s" podCreationTimestamp="2025-12-12 18:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:49:05.879753 +0000 UTC m=+189.662291593" watchObservedRunningTime="2025-12-12 18:49:09.328731989 +0000 UTC m=+193.111270582"
Dec 12 18:49:09.340795 kubelet[2719]: E1212 18:49:09.340761 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:09.629381 systemd-networkd[1443]: lxc_health: Gained IPv6LL
Dec 12 18:49:09.865231 kubelet[2719]: E1212 18:49:09.865181 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:10.867214 kubelet[2719]: E1212 18:49:10.867165 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Dec 12 18:49:15.600624 sshd[4442]: Connection closed by 139.178.68.195 port 59952
Dec 12 18:49:15.601700 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Dec 12 18:49:15.606390 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit.
Dec 12 18:49:15.607058 systemd[1]: sshd@24-172.237.139.125:22-139.178.68.195:59952.service: Deactivated successfully.
Dec 12 18:49:15.609802 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 18:49:15.611658 systemd-logind[1528]: Removed session 25.
Dec 12 18:49:16.341840 kubelet[2719]: E1212 18:49:16.340959 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"