Nov 24 00:40:27.922148 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 00:40:27.922172 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:40:27.922181 kernel: BIOS-provided physical RAM map:
Nov 24 00:40:27.922188 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 24 00:40:27.922194 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 24 00:40:27.922200 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 00:40:27.922209 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 24 00:40:27.922215 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 24 00:40:27.922221 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 24 00:40:27.922227 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 24 00:40:27.922233 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 00:40:27.922239 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 00:40:27.922245 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 24 00:40:27.922251 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 24 00:40:27.922261 kernel: NX (Execute Disable) protection: active
Nov 24 00:40:27.922267 kernel: APIC: Static calls initialized
Nov 24 00:40:27.922273 kernel: SMBIOS 2.8 present.
Nov 24 00:40:27.922280 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 24 00:40:27.922286 kernel: DMI: Memory slots populated: 1/1
Nov 24 00:40:27.922292 kernel: Hypervisor detected: KVM
Nov 24 00:40:27.922301 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 24 00:40:27.922307 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 00:40:27.922313 kernel: kvm-clock: using sched offset of 7093078260 cycles
Nov 24 00:40:27.922320 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 00:40:27.922327 kernel: tsc: Detected 2000.000 MHz processor
Nov 24 00:40:27.922333 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 00:40:27.922340 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 00:40:27.922347 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 24 00:40:27.922354 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 00:40:27.922361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 00:40:27.922383 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 24 00:40:27.922390 kernel: Using GB pages for direct mapping
Nov 24 00:40:27.922396 kernel: ACPI: Early table checksum verification disabled
Nov 24 00:40:27.922403 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 24 00:40:27.922410 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922416 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922423 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922429 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 24 00:40:27.922436 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922445 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922455 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922462 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 00:40:27.922469 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 24 00:40:27.922476 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 24 00:40:27.922485 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 24 00:40:27.922491 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 24 00:40:27.925575 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 24 00:40:27.925590 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 24 00:40:27.925599 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 24 00:40:27.925606 kernel: No NUMA configuration found
Nov 24 00:40:27.925613 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 24 00:40:27.925621 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Nov 24 00:40:27.925628 kernel: Zone ranges:
Nov 24 00:40:27.925654 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 00:40:27.925661 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 24 00:40:27.925690 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 24 00:40:27.925707 kernel: Device empty
Nov 24 00:40:27.925725 kernel: Movable zone start for each node
Nov 24 00:40:27.925733 kernel: Early memory node ranges
Nov 24 00:40:27.925740 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 00:40:27.925747 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 24 00:40:27.925754 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 24 00:40:27.925763 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 24 00:40:27.925770 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 00:40:27.925777 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 00:40:27.925784 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 24 00:40:27.925791 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 00:40:27.925798 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 00:40:27.925805 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 00:40:27.925812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 00:40:27.925819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 00:40:27.925828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 00:40:27.925835 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 00:40:27.925842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 00:40:27.925849 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 00:40:27.925856 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 00:40:27.925863 kernel: TSC deadline timer available
Nov 24 00:40:27.925870 kernel: CPU topo: Max. logical packages: 1
Nov 24 00:40:27.925877 kernel: CPU topo: Max. logical dies: 1
Nov 24 00:40:27.925884 kernel: CPU topo: Max. dies per package: 1
Nov 24 00:40:27.925890 kernel: CPU topo: Max. threads per core: 1
Nov 24 00:40:27.925899 kernel: CPU topo: Num. cores per package: 2
Nov 24 00:40:27.925906 kernel: CPU topo: Num. threads per package: 2
Nov 24 00:40:27.925913 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 24 00:40:27.925920 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 00:40:27.925927 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 24 00:40:27.925933 kernel: kvm-guest: setup PV sched yield
Nov 24 00:40:27.925941 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 24 00:40:27.925948 kernel: Booting paravirtualized kernel on KVM
Nov 24 00:40:27.925955 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 00:40:27.925964 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 24 00:40:27.925971 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 24 00:40:27.925978 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 24 00:40:27.925985 kernel: pcpu-alloc: [0] 0 1
Nov 24 00:40:27.925992 kernel: kvm-guest: PV spinlocks enabled
Nov 24 00:40:27.925999 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 00:40:27.926007 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:40:27.926014 kernel: random: crng init done
Nov 24 00:40:27.926023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 00:40:27.926030 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 00:40:27.926037 kernel: Fallback order for Node 0: 0
Nov 24 00:40:27.926044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 24 00:40:27.926051 kernel: Policy zone: Normal
Nov 24 00:40:27.926058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 00:40:27.926065 kernel: software IO TLB: area num 2.
Nov 24 00:40:27.926072 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 24 00:40:27.926079 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 00:40:27.926088 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 00:40:27.926095 kernel: Dynamic Preempt: voluntary
Nov 24 00:40:27.926102 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 00:40:27.926109 kernel: rcu: RCU event tracing is enabled.
Nov 24 00:40:27.926117 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 24 00:40:27.926124 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 00:40:27.926131 kernel: Rude variant of Tasks RCU enabled.
Nov 24 00:40:27.926138 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 00:40:27.926145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 00:40:27.926154 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 24 00:40:27.926161 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:40:27.926175 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:40:27.926184 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:40:27.926191 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 24 00:40:27.926198 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 00:40:27.926206 kernel: Console: colour VGA+ 80x25
Nov 24 00:40:27.926213 kernel: printk: legacy console [tty0] enabled
Nov 24 00:40:27.926221 kernel: printk: legacy console [ttyS0] enabled
Nov 24 00:40:27.926228 kernel: ACPI: Core revision 20240827
Nov 24 00:40:27.926237 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 00:40:27.926245 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 00:40:27.926252 kernel: x2apic enabled
Nov 24 00:40:27.926259 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 00:40:27.926266 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 24 00:40:27.926273 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 24 00:40:27.926281 kernel: kvm-guest: setup PV IPIs
Nov 24 00:40:27.926290 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 00:40:27.926298 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 24 00:40:27.926305 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 24 00:40:27.926313 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 00:40:27.926320 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 00:40:27.926327 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 00:40:27.926334 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 00:40:27.926342 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 00:40:27.926349 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 00:40:27.926358 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 24 00:40:27.926365 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 00:40:27.926373 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 00:40:27.926380 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 00:40:27.926388 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 00:40:27.926395 kernel: active return thunk: srso_alias_return_thunk
Nov 24 00:40:27.926403 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 00:40:27.926410 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 24 00:40:27.926419 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 24 00:40:27.926427 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 00:40:27.926434 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 00:40:27.926441 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 00:40:27.926448 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 24 00:40:27.926456 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 00:40:27.926463 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 24 00:40:27.926470 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 24 00:40:27.926477 kernel: Freeing SMP alternatives memory: 32K
Nov 24 00:40:27.926487 kernel: pid_max: default: 32768 minimum: 301
Nov 24 00:40:27.926494 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 00:40:27.926501 kernel: landlock: Up and running.
Nov 24 00:40:27.926508 kernel: SELinux: Initializing.
Nov 24 00:40:27.926515 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:40:27.926523 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 00:40:27.926530 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 24 00:40:27.926537 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 00:40:27.926545 kernel: ... version: 0
Nov 24 00:40:27.926554 kernel: ... bit width: 48
Nov 24 00:40:27.926561 kernel: ... generic registers: 6
Nov 24 00:40:27.926568 kernel: ... value mask: 0000ffffffffffff
Nov 24 00:40:27.926575 kernel: ... max period: 00007fffffffffff
Nov 24 00:40:27.926582 kernel: ... fixed-purpose events: 0
Nov 24 00:40:27.926590 kernel: ... event mask: 000000000000003f
Nov 24 00:40:27.926597 kernel: signal: max sigframe size: 3376
Nov 24 00:40:27.926604 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 00:40:27.926611 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 00:40:27.926621 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 00:40:27.926628 kernel: smp: Bringing up secondary CPUs ...
Nov 24 00:40:27.926676 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 00:40:27.926683 kernel: .... node #0, CPUs: #1
Nov 24 00:40:27.926691 kernel: smp: Brought up 1 node, 2 CPUs
Nov 24 00:40:27.926698 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 24 00:40:27.926706 kernel: Memory: 3952856K/4193772K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 235488K reserved, 0K cma-reserved)
Nov 24 00:40:27.926713 kernel: devtmpfs: initialized
Nov 24 00:40:27.926720 kernel: x86/mm: Memory block size: 128MB
Nov 24 00:40:27.926730 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 00:40:27.926737 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 24 00:40:27.926745 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 00:40:27.926752 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 00:40:27.926759 kernel: audit: initializing netlink subsys (disabled)
Nov 24 00:40:27.926766 kernel: audit: type=2000 audit(1763944825.241:1): state=initialized audit_enabled=0 res=1
Nov 24 00:40:27.926773 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 00:40:27.926780 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 00:40:27.926787 kernel: cpuidle: using governor menu
Nov 24 00:40:27.926796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 00:40:27.926803 kernel: dca service started, version 1.12.1
Nov 24 00:40:27.926810 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 24 00:40:27.926817 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 24 00:40:27.926824 kernel: PCI: Using configuration type 1 for base access
Nov 24 00:40:27.926831 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 00:40:27.926839 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 00:40:27.926846 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 00:40:27.926853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 00:40:27.926862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 00:40:27.926869 kernel: ACPI: Added _OSI(Module Device)
Nov 24 00:40:27.926875 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 00:40:27.926882 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 00:40:27.926889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 00:40:27.926896 kernel: ACPI: Interpreter enabled
Nov 24 00:40:27.926903 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 24 00:40:27.926910 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 00:40:27.926917 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 00:40:27.926926 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 00:40:27.926933 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 24 00:40:27.926940 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 00:40:27.927134 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 00:40:27.927266 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 24 00:40:27.927391 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 24 00:40:27.927401 kernel: PCI host bridge to bus 0000:00
Nov 24 00:40:27.927531 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 00:40:27.927684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 00:40:27.927805 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 00:40:27.927919 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 24 00:40:27.928031 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 00:40:27.928144 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 24 00:40:27.928255 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 00:40:27.928403 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 24 00:40:27.928542 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 24 00:40:27.928702 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 24 00:40:27.928830 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 24 00:40:27.928952 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 24 00:40:27.929072 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 00:40:27.929210 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 24 00:40:27.929333 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 24 00:40:27.929454 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 24 00:40:27.929601 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 24 00:40:27.931782 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 00:40:27.931916 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 24 00:40:27.932039 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 24 00:40:27.932166 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 24 00:40:27.932288 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 24 00:40:27.932420 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 24 00:40:27.932541 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 24 00:40:27.932689 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 24 00:40:27.932813 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 24 00:40:27.932933 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 24 00:40:27.933064 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 24 00:40:27.933184 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 24 00:40:27.933194 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 00:40:27.933202 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 00:40:27.933209 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 00:40:27.933216 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 00:40:27.933223 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 24 00:40:27.933233 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 24 00:40:27.933241 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 24 00:40:27.933248 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 24 00:40:27.933255 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 24 00:40:27.933262 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 24 00:40:27.933269 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 24 00:40:27.933276 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 24 00:40:27.933283 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 24 00:40:27.933290 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 24 00:40:27.933299 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 24 00:40:27.933306 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 24 00:40:27.933313 kernel: iommu: Default domain type: Translated
Nov 24 00:40:27.933320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 00:40:27.933327 kernel: PCI: Using ACPI for IRQ routing
Nov 24 00:40:27.933334 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 00:40:27.933341 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 24 00:40:27.933348 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 24 00:40:27.933466 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 24 00:40:27.933589 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 24 00:40:27.938306 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 00:40:27.938322 kernel: vgaarb: loaded
Nov 24 00:40:27.938330 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 00:40:27.938338 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 00:40:27.938345 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 00:40:27.938352 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 00:40:27.938360 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 00:40:27.938367 kernel: pnp: PnP ACPI init
Nov 24 00:40:27.938514 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 24 00:40:27.938525 kernel: pnp: PnP ACPI: found 5 devices
Nov 24 00:40:27.938533 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 00:40:27.938540 kernel: NET: Registered PF_INET protocol family
Nov 24 00:40:27.938548 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 00:40:27.938555 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 24 00:40:27.938562 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 00:40:27.938570 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 00:40:27.938581 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 24 00:40:27.938588 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 24 00:40:27.938595 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:40:27.938602 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 00:40:27.938610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 00:40:27.938617 kernel: NET: Registered PF_XDP protocol family
Nov 24 00:40:27.938751 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 00:40:27.938865 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 00:40:27.938976 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 00:40:27.939092 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 24 00:40:27.939202 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 24 00:40:27.939314 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 24 00:40:27.939324 kernel: PCI: CLS 0 bytes, default 64
Nov 24 00:40:27.939332 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 24 00:40:27.939339 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 24 00:40:27.939347 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 24 00:40:27.939354 kernel: Initialise system trusted keyrings
Nov 24 00:40:27.939365 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 24 00:40:27.939372 kernel: Key type asymmetric registered
Nov 24 00:40:27.939380 kernel: Asymmetric key parser 'x509' registered
Nov 24 00:40:27.939387 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 00:40:27.939395 kernel: io scheduler mq-deadline registered
Nov 24 00:40:27.939402 kernel: io scheduler kyber registered
Nov 24 00:40:27.939409 kernel: io scheduler bfq registered
Nov 24 00:40:27.939417 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 00:40:27.939424 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 24 00:40:27.939434 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 24 00:40:27.939441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 00:40:27.939448 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:40:27.939456 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 00:40:27.939463 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 00:40:27.939470 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 00:40:27.939478 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 00:40:27.939848 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 24 00:40:27.939977 kernel: rtc_cmos 00:03: registered as rtc0
Nov 24 00:40:27.940096 kernel: rtc_cmos 00:03: setting system clock to 2025-11-24T00:40:27 UTC (1763944827)
Nov 24 00:40:27.940218 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 24 00:40:27.940228 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 00:40:27.940235 kernel: NET: Registered PF_INET6 protocol family
Nov 24 00:40:27.940242 kernel: Segment Routing with IPv6
Nov 24 00:40:27.940249 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 00:40:27.940256 kernel: NET: Registered PF_PACKET protocol family
Nov 24 00:40:27.940263 kernel: Key type dns_resolver registered
Nov 24 00:40:27.940273 kernel: IPI shorthand broadcast: enabled
Nov 24 00:40:27.940280 kernel: sched_clock: Marking stable (2831050910, 342693620)->(3258783220, -85038690)
Nov 24 00:40:27.940287 kernel: registered taskstats version 1
Nov 24 00:40:27.940294 kernel: Loading compiled-in X.509 certificates
Nov 24 00:40:27.940301 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 00:40:27.940309 kernel: Demotion targets for Node 0: null
Nov 24 00:40:27.940316 kernel: Key type .fscrypt registered
Nov 24 00:40:27.940322 kernel: Key type fscrypt-provisioning registered
Nov 24 00:40:27.940329 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 00:40:27.940339 kernel: ima: Allocated hash algorithm: sha1
Nov 24 00:40:27.940346 kernel: ima: No architecture policies found
Nov 24 00:40:27.940354 kernel: clk: Disabling unused clocks
Nov 24 00:40:27.940361 kernel: Warning: unable to open an initial console.
Nov 24 00:40:27.940368 kernel: Freeing unused kernel image (initmem) memory: 46200K
Nov 24 00:40:27.940375 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 00:40:27.940382 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 00:40:27.940389 kernel: Run /init as init process
Nov 24 00:40:27.940396 kernel: with arguments:
Nov 24 00:40:27.940405 kernel: /init
Nov 24 00:40:27.940412 kernel: with environment:
Nov 24 00:40:27.940433 kernel: HOME=/
Nov 24 00:40:27.940442 kernel: TERM=linux
Nov 24 00:40:27.940451 systemd[1]: Successfully made /usr/ read-only.
Nov 24 00:40:27.940461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:40:27.940469 systemd[1]: Detected virtualization kvm.
Nov 24 00:40:27.940479 systemd[1]: Detected architecture x86-64.
Nov 24 00:40:27.940486 systemd[1]: Running in initrd.
Nov 24 00:40:27.940493 systemd[1]: No hostname configured, using default hostname. Nov 24 00:40:27.940501 systemd[1]: Hostname set to . Nov 24 00:40:27.940509 systemd[1]: Initializing machine ID from random generator. Nov 24 00:40:27.940516 systemd[1]: Queued start job for default target initrd.target. Nov 24 00:40:27.940524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:40:27.940531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:40:27.940542 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 24 00:40:27.940549 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:40:27.940557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 24 00:40:27.940565 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 24 00:40:27.940574 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 24 00:40:27.940583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 24 00:40:27.940591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:40:27.940600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:40:27.940608 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:40:27.940616 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:40:27.940623 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:40:27.940631 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:40:27.940653 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 24 00:40:27.940663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:40:27.940672 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 00:40:27.940680 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 00:40:27.940689 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:40:27.940697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:40:27.940707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:40:27.940715 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:40:27.940722 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 24 00:40:27.940732 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:40:27.940740 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 24 00:40:27.940748 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 24 00:40:27.940756 systemd[1]: Starting systemd-fsck-usr.service...
Nov 24 00:40:27.940764 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:40:27.940771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:40:27.940784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:40:27.940890 systemd-journald[187]: Collecting audit messages is disabled.
Nov 24 00:40:27.940966 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 24 00:40:27.941006 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:40:27.941035 systemd-journald[187]: Journal started
Nov 24 00:40:27.941106 systemd-journald[187]: Runtime Journal (/run/log/journal/71df0f4c683f4fb5832964510504a514) is 8M, max 78.2M, 70.2M free.
Nov 24 00:40:27.918307 systemd-modules-load[188]: Inserted module 'overlay'
Nov 24 00:40:27.950763 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 00:40:27.950784 kernel: Bridge firewalling registered
Nov 24 00:40:27.943801 systemd-modules-load[188]: Inserted module 'br_netfilter'
Nov 24 00:40:27.955556 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:40:27.978133 systemd[1]: Finished systemd-fsck-usr.service.
Nov 24 00:40:28.062745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:40:28.064272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:40:28.068499 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 24 00:40:28.072963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:40:28.078784 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 00:40:28.082756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:40:28.091804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:40:28.102934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:40:28.107407 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 24 00:40:28.108839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 24 00:40:28.111231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 00:40:28.114796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:40:28.119973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:40:28.124821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:40:28.138462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:40:28.141042 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:40:28.169790 systemd-resolved[224]: Positive Trust Anchors:
Nov 24 00:40:28.169806 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:40:28.169834 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 00:40:28.172949 systemd-resolved[224]: Defaulting to hostname 'linux'.
Nov 24 00:40:28.173987 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 00:40:28.176455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:40:28.245674 kernel: SCSI subsystem initialized
Nov 24 00:40:28.254709 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 00:40:28.265670 kernel: iscsi: registered transport (tcp)
Nov 24 00:40:28.285860 kernel: iscsi: registered transport (qla4xxx)
Nov 24 00:40:28.285887 kernel: QLogic iSCSI HBA Driver
Nov 24 00:40:28.307794 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:40:28.324210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:40:28.327413 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 00:40:28.383615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 24 00:40:28.385826 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 24 00:40:28.443667 kernel: raid6: avx2x4 gen() 33118 MB/s
Nov 24 00:40:28.461662 kernel: raid6: avx2x2 gen() 32622 MB/s
Nov 24 00:40:28.479751 kernel: raid6: avx2x1 gen() 23206 MB/s
Nov 24 00:40:28.479778 kernel: raid6: using algorithm avx2x4 gen() 33118 MB/s
Nov 24 00:40:28.499925 kernel: raid6: .... xor() 5180 MB/s, rmw enabled
Nov 24 00:40:28.499950 kernel: raid6: using avx2x2 recovery algorithm
Nov 24 00:40:28.521734 kernel: xor: automatically using best checksumming function avx
Nov 24 00:40:28.660673 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 24 00:40:28.669203 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 00:40:28.671675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:40:28.697554 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Nov 24 00:40:28.703916 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:40:28.707075 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 24 00:40:28.736713 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Nov 24 00:40:28.767711 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 00:40:28.770423 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 00:40:28.848509 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:40:28.852969 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 24 00:40:28.925674 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Nov 24 00:40:28.934202 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 00:40:28.934217 kernel: libata version 3.00 loaded.
Nov 24 00:40:28.936656 kernel: ahci 0000:00:1f.2: version 3.0
Nov 24 00:40:28.939716 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 24 00:40:28.948977 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 24 00:40:28.949238 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 24 00:40:28.949388 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 24 00:40:28.949531 kernel: AES CTR mode by8 optimization enabled
Nov 24 00:40:28.955115 kernel: scsi host0: Virtio SCSI HBA
Nov 24 00:40:28.955292 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 24 00:40:28.994351 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:40:29.124123 kernel: scsi host1: ahci
Nov 24 00:40:29.124344 kernel: scsi host2: ahci
Nov 24 00:40:29.124503 kernel: scsi host3: ahci
Nov 24 00:40:29.126757 kernel: scsi host4: ahci
Nov 24 00:40:29.126923 kernel: scsi host5: ahci
Nov 24 00:40:29.127077 kernel: scsi host6: ahci
Nov 24 00:40:29.127225 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Nov 24 00:40:29.127237 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Nov 24 00:40:29.113680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:40:29.177055 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Nov 24 00:40:29.177081 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Nov 24 00:40:29.177092 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Nov 24 00:40:29.177103 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Nov 24 00:40:29.176593 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:40:29.179749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:40:29.180906 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:40:29.294778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:40:29.486881 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.486946 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.486958 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.488879 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.493863 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.493906 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 24 00:40:29.510745 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 24 00:40:29.524247 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 24 00:40:29.524490 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 24 00:40:29.551940 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 24 00:40:29.552134 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 24 00:40:29.552289 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 24 00:40:29.566836 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 24 00:40:29.566904 kernel: GPT:9289727 != 167739391
Nov 24 00:40:29.570418 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 24 00:40:29.570436 kernel: GPT:9289727 != 167739391
Nov 24 00:40:29.574086 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 24 00:40:29.574117 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 24 00:40:29.578859 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 24 00:40:29.634236 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 24 00:40:29.644559 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 24 00:40:29.652066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 24 00:40:29.662491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 24 00:40:29.670788 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 24 00:40:29.671585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 24 00:40:29.674753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 00:40:29.675764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:40:29.677577 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 00:40:29.680304 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 24 00:40:29.684780 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 24 00:40:29.696983 disk-uuid[613]: Primary Header is updated.
Nov 24 00:40:29.696983 disk-uuid[613]: Secondary Entries is updated.
Nov 24 00:40:29.696983 disk-uuid[613]: Secondary Header is updated.
Nov 24 00:40:29.706655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 24 00:40:29.706910 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 00:40:30.726739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 24 00:40:30.727151 disk-uuid[618]: The operation has completed successfully.
Nov 24 00:40:30.780463 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 24 00:40:30.781533 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 24 00:40:30.806668 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 24 00:40:30.821909 sh[635]: Success
Nov 24 00:40:30.840931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 00:40:30.840964 kernel: device-mapper: uevent: version 1.0.3
Nov 24 00:40:30.843006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 24 00:40:30.855734 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Nov 24 00:40:30.895382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 24 00:40:30.899773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 24 00:40:30.910577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 24 00:40:30.924700 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (647)
Nov 24 00:40:30.924729 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7
Nov 24 00:40:30.927948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:40:30.941475 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 24 00:40:30.941501 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 24 00:40:30.941519 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 24 00:40:30.945365 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 24 00:40:30.947169 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 00:40:30.949087 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 24 00:40:30.950741 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 24 00:40:30.953779 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 24 00:40:30.990658 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (679)
Nov 24 00:40:30.996307 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:40:30.996335 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:40:31.007825 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 24 00:40:31.007850 kernel: BTRFS info (device sda6): turning on async discard
Nov 24 00:40:31.007863 kernel: BTRFS info (device sda6): enabling free space tree
Nov 24 00:40:31.017879 kernel: BTRFS info (device sda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:40:31.019567 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 24 00:40:31.022467 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 24 00:40:31.124907 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 00:40:31.132545 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 00:40:31.144217 ignition[740]: Ignition 2.22.0
Nov 24 00:40:31.145305 ignition[740]: Stage: fetch-offline
Nov 24 00:40:31.145341 ignition[740]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:31.145354 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:31.145429 ignition[740]: parsed url from cmdline: ""
Nov 24 00:40:31.145433 ignition[740]: no config URL provided
Nov 24 00:40:31.145438 ignition[740]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 00:40:31.148909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 00:40:31.145446 ignition[740]: no config at "/usr/lib/ignition/user.ign"
Nov 24 00:40:31.145452 ignition[740]: failed to fetch config: resource requires networking
Nov 24 00:40:31.145676 ignition[740]: Ignition finished successfully
Nov 24 00:40:31.175036 systemd-networkd[821]: lo: Link UP
Nov 24 00:40:31.175049 systemd-networkd[821]: lo: Gained carrier
Nov 24 00:40:31.176658 systemd-networkd[821]: Enumeration completed
Nov 24 00:40:31.177038 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:40:31.177043 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 00:40:31.177710 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 00:40:31.180101 systemd-networkd[821]: eth0: Link UP
Nov 24 00:40:31.180264 systemd-networkd[821]: eth0: Gained carrier
Nov 24 00:40:31.180274 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:40:31.181504 systemd[1]: Reached target network.target - Network.
Nov 24 00:40:31.183741 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 24 00:40:31.213174 ignition[825]: Ignition 2.22.0
Nov 24 00:40:31.213188 ignition[825]: Stage: fetch
Nov 24 00:40:31.213296 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:31.213307 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:31.213372 ignition[825]: parsed url from cmdline: ""
Nov 24 00:40:31.213376 ignition[825]: no config URL provided
Nov 24 00:40:31.213381 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 00:40:31.213389 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Nov 24 00:40:31.213412 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 24 00:40:31.213546 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 24 00:40:31.414285 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 24 00:40:31.414451 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 24 00:40:31.814845 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 24 00:40:31.815037 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 24 00:40:31.916692 systemd-networkd[821]: eth0: DHCPv4 address 172.238.170.212/24, gateway 172.238.170.1 acquired from 23.33.177.56
Nov 24 00:40:32.321716 systemd-networkd[821]: eth0: Gained IPv6LL
Nov 24 00:40:32.615102 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 24 00:40:32.699633 ignition[825]: PUT result: OK
Nov 24 00:40:32.700603 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 24 00:40:32.814182 ignition[825]: GET result: OK
Nov 24 00:40:32.814286 ignition[825]: parsing config with SHA512: 289a1d46eb511c5914d909aec9fb38fb5e2541fdcb48b520e4b2671914de7050d1ac26047b40cc3546017a51f3e65255104d601b94fa86059264a195e2c9aeae
Nov 24 00:40:32.822717 unknown[825]: fetched base config from "system"
Nov 24 00:40:32.823831 ignition[825]: fetch: fetch complete
Nov 24 00:40:32.822728 unknown[825]: fetched base config from "system"
Nov 24 00:40:32.823837 ignition[825]: fetch: fetch passed
Nov 24 00:40:32.822735 unknown[825]: fetched user config from "akamai"
Nov 24 00:40:32.823880 ignition[825]: Ignition finished successfully
Nov 24 00:40:32.828019 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 24 00:40:32.840956 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 24 00:40:32.873874 ignition[832]: Ignition 2.22.0
Nov 24 00:40:32.873889 ignition[832]: Stage: kargs
Nov 24 00:40:32.874002 ignition[832]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:32.876900 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 24 00:40:32.874013 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:32.874678 ignition[832]: kargs: kargs passed
Nov 24 00:40:32.874719 ignition[832]: Ignition finished successfully
Nov 24 00:40:32.880797 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 24 00:40:32.911203 ignition[838]: Ignition 2.22.0
Nov 24 00:40:32.911218 ignition[838]: Stage: disks
Nov 24 00:40:32.911345 ignition[838]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:32.911356 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:32.912263 ignition[838]: disks: disks passed
Nov 24 00:40:32.914121 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 24 00:40:32.912301 ignition[838]: Ignition finished successfully
Nov 24 00:40:32.915851 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 24 00:40:32.916965 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 24 00:40:32.918425 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 00:40:32.919797 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 00:40:32.921341 systemd[1]: Reached target basic.target - Basic System.
Nov 24 00:40:32.923750 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 24 00:40:32.968162 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 24 00:40:32.972058 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 24 00:40:32.975307 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 24 00:40:33.087665 kernel: EXT4-fs (sda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none.
Nov 24 00:40:33.087326 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 24 00:40:33.088402 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 24 00:40:33.090571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 00:40:33.093703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 24 00:40:33.096191 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 24 00:40:33.097293 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 24 00:40:33.097317 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 00:40:33.104905 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 24 00:40:33.106858 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 24 00:40:33.116099 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Nov 24 00:40:33.116126 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:40:33.121993 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:40:33.128046 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 24 00:40:33.128082 kernel: BTRFS info (device sda6): turning on async discard
Nov 24 00:40:33.131829 kernel: BTRFS info (device sda6): enabling free space tree
Nov 24 00:40:33.134212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 00:40:33.166263 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Nov 24 00:40:33.171611 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Nov 24 00:40:33.176994 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Nov 24 00:40:33.182105 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 24 00:40:33.271696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 24 00:40:33.273380 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 24 00:40:33.276516 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 24 00:40:33.291132 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 24 00:40:33.296890 kernel: BTRFS info (device sda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:40:33.308694 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 24 00:40:33.322705 ignition[968]: INFO : Ignition 2.22.0
Nov 24 00:40:33.322705 ignition[968]: INFO : Stage: mount
Nov 24 00:40:33.322705 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:33.322705 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:33.328351 ignition[968]: INFO : mount: mount passed
Nov 24 00:40:33.328351 ignition[968]: INFO : Ignition finished successfully
Nov 24 00:40:33.327709 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 24 00:40:33.329962 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 24 00:40:34.089240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 00:40:34.113686 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979)
Nov 24 00:40:34.113751 kernel: BTRFS info (device sda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:40:34.118584 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:40:34.126248 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 24 00:40:34.126271 kernel: BTRFS info (device sda6): turning on async discard
Nov 24 00:40:34.126284 kernel: BTRFS info (device sda6): enabling free space tree
Nov 24 00:40:34.131096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 00:40:34.162956 ignition[996]: INFO : Ignition 2.22.0
Nov 24 00:40:34.162956 ignition[996]: INFO : Stage: files
Nov 24 00:40:34.164887 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:34.164887 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:34.164887 ignition[996]: DEBUG : files: compiled without relabeling support, skipping
Nov 24 00:40:34.167727 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 24 00:40:34.167727 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 24 00:40:34.169906 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 24 00:40:34.169906 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 24 00:40:34.169906 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 24 00:40:34.168901 unknown[996]: wrote ssh authorized keys file for user: core
Nov 24 00:40:34.173967 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 24 00:40:34.173967 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 24 00:40:34.462993 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 24 00:40:34.516979 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 24 00:40:34.518359 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 24 00:40:34.518359 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 24 00:40:34.759204 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 24 00:40:34.959847 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 00:40:34.961025 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 24 00:40:34.992904 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 24 00:40:35.279929 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 24 00:40:35.897018 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 24 00:40:35.898527 ignition[996]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 24 00:40:35.898527 ignition[996]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 00:40:35.900796 ignition[996]: INFO : files: files passed
Nov 24 00:40:35.900796 ignition[996]: INFO : Ignition finished successfully
Nov 24 00:40:35.904053 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 24 00:40:35.908783 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 24 00:40:35.911428 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 24 00:40:35.922687 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 24 00:40:35.923934 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 24 00:40:35.930159 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 00:40:35.930159 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 00:40:35.933536 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 00:40:35.934863 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 00:40:35.936029 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 24 00:40:35.938088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 24 00:40:35.977759 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 00:40:35.977878 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 24 00:40:35.979517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 24 00:40:35.980880 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 24 00:40:35.982467 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 24 00:40:35.983201 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 24 00:40:36.014162 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 00:40:36.016762 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 24 00:40:36.032670 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:40:36.034362 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:40:36.035191 systemd[1]: Stopped target timers.target - Timer Units.
Nov 24 00:40:36.035987 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 00:40:36.036082 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 00:40:36.038036 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 24 00:40:36.039043 systemd[1]: Stopped target basic.target - Basic System.
Nov 24 00:40:36.040470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 24 00:40:36.042016 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 00:40:36.043464 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 24 00:40:36.044912 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 00:40:36.046487 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 24 00:40:36.048113 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 00:40:36.049775 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 24 00:40:36.051341 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 24 00:40:36.052907 systemd[1]: Stopped target swap.target - Swaps.
Nov 24 00:40:36.054393 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 00:40:36.054489 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 00:40:36.056358 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:40:36.057438 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:40:36.058796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 24 00:40:36.058895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:40:36.060262 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 00:40:36.060392 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 24 00:40:36.062378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 24 00:40:36.062483 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 00:40:36.063511 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 24 00:40:36.063661 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 24 00:40:36.066719 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 24 00:40:36.068070 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 00:40:36.068179 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:40:36.072184 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 24 00:40:36.114265 ignition[1050]: INFO : Ignition 2.22.0
Nov 24 00:40:36.114265 ignition[1050]: INFO : Stage: umount
Nov 24 00:40:36.114265 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 00:40:36.114265 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 24 00:40:36.114265 ignition[1050]: INFO : umount: umount passed
Nov 24 00:40:36.114265 ignition[1050]: INFO : Ignition finished successfully
Nov 24 00:40:36.074494 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 00:40:36.074605 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:40:36.077509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 00:40:36.077606 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 00:40:36.087045 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 00:40:36.087146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 24 00:40:36.111209 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 24 00:40:36.111311 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 24 00:40:36.113996 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 24 00:40:36.116240 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 24 00:40:36.116324 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 24 00:40:36.118536 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 24 00:40:36.118588 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 24 00:40:36.120150 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 24 00:40:36.120200 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 24 00:40:36.121493 systemd[1]: Stopped target network.target - Network.
Nov 24 00:40:36.122837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 24 00:40:36.122889 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 00:40:36.124399 systemd[1]: Stopped target paths.target - Path Units.
Nov 24 00:40:36.125896 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 00:40:36.129864 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:40:36.130737 systemd[1]: Stopped target slices.target - Slice Units.
Nov 24 00:40:36.132099 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 24 00:40:36.133524 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 24 00:40:36.133568 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 00:40:36.134932 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 24 00:40:36.134974 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:40:36.136316 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 24 00:40:36.136368 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 24 00:40:36.137750 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 24 00:40:36.137796 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 24 00:40:36.139355 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 24 00:40:36.140716 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 24 00:40:36.142345 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 24 00:40:36.142458 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 24 00:40:36.144975 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 24 00:40:36.145045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 24 00:40:36.148184 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 24 00:40:36.148306 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 24 00:40:36.152938 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 24 00:40:36.153203 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 24 00:40:36.153332 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 24 00:40:36.155317 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 24 00:40:36.156548 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 24 00:40:36.157868 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 24 00:40:36.157913 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:40:36.160019 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 24 00:40:36.162073 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 24 00:40:36.162126 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 00:40:36.164200 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 00:40:36.164251 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:40:36.166586 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 00:40:36.166633 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:40:36.167605 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 00:40:36.167672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:40:36.169380 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:40:36.174135 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 00:40:36.174197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:40:36.186387 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 24 00:40:36.186516 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 24 00:40:36.188384 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 00:40:36.188911 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:40:36.190266 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 00:40:36.190311 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:40:36.191602 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 00:40:36.191655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:40:36.193124 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 00:40:36.193173 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 00:40:36.195368 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 00:40:36.195420 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 24 00:40:36.196967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 24 00:40:36.197013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:40:36.199475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 24 00:40:36.201111 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 24 00:40:36.201165 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:40:36.203883 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 00:40:36.203936 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:40:36.206161 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:40:36.206209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:40:36.209063 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 24 00:40:36.209120 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 24 00:40:36.209165 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:40:36.217012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 00:40:36.217219 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 24 00:40:36.219132 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 24 00:40:36.221113 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 24 00:40:36.250739 systemd[1]: Switching root.
Nov 24 00:40:36.288851 systemd-journald[187]: Journal stopped
Nov 24 00:40:37.460215 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Nov 24 00:40:37.460242 kernel: SELinux: policy capability network_peer_controls=1
Nov 24 00:40:37.460254 kernel: SELinux: policy capability open_perms=1
Nov 24 00:40:37.460264 kernel: SELinux: policy capability extended_socket_class=1
Nov 24 00:40:37.460272 kernel: SELinux: policy capability always_check_network=0
Nov 24 00:40:37.460283 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 24 00:40:37.460293 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 24 00:40:37.460302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 24 00:40:37.460311 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 24 00:40:37.460320 kernel: SELinux: policy capability userspace_initial_context=0
Nov 24 00:40:37.460329 kernel: audit: type=1403 audit(1763944836.465:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 00:40:37.460339 systemd[1]: Successfully loaded SELinux policy in 71.225ms.
Nov 24 00:40:37.460352 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.535ms.
Nov 24 00:40:37.460364 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:40:37.460374 systemd[1]: Detected virtualization kvm.
Nov 24 00:40:37.460384 systemd[1]: Detected architecture x86-64.
Nov 24 00:40:37.460398 systemd[1]: Detected first boot.
Nov 24 00:40:37.460408 systemd[1]: Initializing machine ID from random generator.
Nov 24 00:40:37.460418 zram_generator::config[1093]: No configuration found.
Nov 24 00:40:37.460428 kernel: Guest personality initialized and is inactive
Nov 24 00:40:37.460438 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Nov 24 00:40:37.460447 kernel: Initialized host personality
Nov 24 00:40:37.460456 kernel: NET: Registered PF_VSOCK protocol family
Nov 24 00:40:37.460466 systemd[1]: Populated /etc with preset unit settings.
Nov 24 00:40:37.460479 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 24 00:40:37.460489 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 00:40:37.460499 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 24 00:40:37.460509 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 00:40:37.460519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 24 00:40:37.460529 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 24 00:40:37.460539 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 24 00:40:37.460551 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 24 00:40:37.460561 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 24 00:40:37.460571 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 24 00:40:37.460582 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 24 00:40:37.460592 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 24 00:40:37.460602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:40:37.460613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:40:37.460623 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 24 00:40:37.460653 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 24 00:40:37.460669 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 24 00:40:37.460680 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 00:40:37.460690 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 24 00:40:37.460701 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:40:37.460711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:40:37.460721 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 24 00:40:37.460734 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 24 00:40:37.460744 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 24 00:40:37.460754 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 24 00:40:37.460765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:40:37.460775 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 00:40:37.460785 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 00:40:37.460795 systemd[1]: Reached target swap.target - Swaps.
Nov 24 00:40:37.460806 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 24 00:40:37.460816 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 24 00:40:37.460828 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 24 00:40:37.460839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:40:37.460849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:40:37.460861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:40:37.460874 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 24 00:40:37.460884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 24 00:40:37.460895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 24 00:40:37.460905 systemd[1]: Mounting media.mount - External Media Directory...
Nov 24 00:40:37.460915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:37.460926 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 24 00:40:37.460936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 24 00:40:37.460946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 24 00:40:37.460959 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 00:40:37.460970 systemd[1]: Reached target machines.target - Containers.
Nov 24 00:40:37.460980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 24 00:40:37.460990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:40:37.461001 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:40:37.461011 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 24 00:40:37.461021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:40:37.461032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 00:40:37.461042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:40:37.461055 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 24 00:40:37.461065 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:40:37.461075 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 24 00:40:37.461088 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 00:40:37.461098 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 24 00:40:37.461109 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 24 00:40:37.461119 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 24 00:40:37.461130 kernel: fuse: init (API version 7.41)
Nov 24 00:40:37.461142 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:40:37.461153 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:40:37.461163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:40:37.461173 kernel: loop: module loaded
Nov 24 00:40:37.461183 kernel: ACPI: bus type drm_connector registered
Nov 24 00:40:37.461193 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:40:37.461203 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 24 00:40:37.461214 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 24 00:40:37.461247 systemd-journald[1184]: Collecting audit messages is disabled.
Nov 24 00:40:37.461269 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 00:40:37.461280 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 24 00:40:37.461291 systemd-journald[1184]: Journal started
Nov 24 00:40:37.461312 systemd-journald[1184]: Runtime Journal (/run/log/journal/30b3cd36af4849a0826b2c087cc1d020) is 8M, max 78.2M, 70.2M free.
Nov 24 00:40:37.095128 systemd[1]: Queued start job for default target multi-user.target.
Nov 24 00:40:37.108351 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 24 00:40:37.108913 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 00:40:37.465741 systemd[1]: Stopped verity-setup.service.
Nov 24 00:40:37.472661 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:37.477884 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:40:37.479360 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 24 00:40:37.480446 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 24 00:40:37.481506 systemd[1]: Mounted media.mount - External Media Directory.
Nov 24 00:40:37.482388 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 24 00:40:37.483271 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 24 00:40:37.484253 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 24 00:40:37.485287 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 24 00:40:37.486595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:40:37.488086 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 00:40:37.488346 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 24 00:40:37.489483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 00:40:37.490157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 00:40:37.491232 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 00:40:37.491483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 00:40:37.492834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:40:37.493086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:40:37.494328 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 00:40:37.494627 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 24 00:40:37.495825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 00:40:37.496263 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 00:40:37.497569 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:40:37.498804 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:40:37.500183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 24 00:40:37.501346 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 24 00:40:37.516519 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 00:40:37.520761 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 24 00:40:37.523711 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 24 00:40:37.525704 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 24 00:40:37.525733 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 00:40:37.527529 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 24 00:40:37.538752 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 24 00:40:37.540790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:40:37.544757 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 24 00:40:37.548182 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 24 00:40:37.549492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 00:40:37.551870 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 24 00:40:37.553758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 00:40:37.555909 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:40:37.560551 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 24 00:40:37.567443 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 24 00:40:37.570978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 24 00:40:37.572138 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 24 00:40:37.583699 systemd-journald[1184]: Time spent on flushing to /var/log/journal/30b3cd36af4849a0826b2c087cc1d020 is 27.718ms for 1010 entries.
Nov 24 00:40:37.583699 systemd-journald[1184]: System Journal (/var/log/journal/30b3cd36af4849a0826b2c087cc1d020) is 8M, max 195.6M, 187.6M free.
Nov 24 00:40:37.625023 systemd-journald[1184]: Received client request to flush runtime journal.
Nov 24 00:40:37.595571 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 24 00:40:37.597447 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 24 00:40:37.604870 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 24 00:40:37.634666 kernel: loop0: detected capacity change from 0 to 128560
Nov 24 00:40:37.629706 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 24 00:40:37.661660 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 24 00:40:37.664406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:40:37.672241 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 24 00:40:37.680409 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:40:37.688263 kernel: loop1: detected capacity change from 0 to 8
Nov 24 00:40:37.698116 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 24 00:40:37.702904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:40:37.711682 kernel: loop2: detected capacity change from 0 to 110984
Nov 24 00:40:37.740908 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Nov 24 00:40:37.741194 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Nov 24 00:40:37.745996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:40:37.748372 kernel: loop3: detected capacity change from 0 to 229808 Nov 24 00:40:37.800662 kernel: loop4: detected capacity change from 0 to 128560 Nov 24 00:40:37.821797 kernel: loop5: detected capacity change from 0 to 8 Nov 24 00:40:37.828665 kernel: loop6: detected capacity change from 0 to 110984 Nov 24 00:40:37.846668 kernel: loop7: detected capacity change from 0 to 229808 Nov 24 00:40:37.867509 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Nov 24 00:40:37.868320 (sd-merge)[1241]: Merged extensions into '/usr'. Nov 24 00:40:37.875419 systemd[1]: Reload requested from client PID 1218 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:40:37.875434 systemd[1]: Reloading... Nov 24 00:40:37.987731 zram_generator::config[1267]: No configuration found. Nov 24 00:40:38.115594 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:40:38.208946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:40:38.209450 systemd[1]: Reloading finished in 333 ms. Nov 24 00:40:38.247098 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:40:38.248325 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:40:38.249423 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:40:38.257868 systemd[1]: Starting ensure-sysext.service... Nov 24 00:40:38.261927 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:40:38.264357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:40:38.279717 systemd[1]: Reload requested from client PID 1311 ('systemctl') (unit ensure-sysext.service)... 
Nov 24 00:40:38.279730 systemd[1]: Reloading...
Nov 24 00:40:38.280017 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 24 00:40:38.280048 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 24 00:40:38.280339 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 24 00:40:38.280594 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 24 00:40:38.281480 systemd-tmpfiles[1312]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 24 00:40:38.281765 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Nov 24 00:40:38.281832 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Nov 24 00:40:38.286114 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:40:38.286130 systemd-tmpfiles[1312]: Skipping /boot
Nov 24 00:40:38.298442 systemd-tmpfiles[1312]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 00:40:38.298457 systemd-tmpfiles[1312]: Skipping /boot
Nov 24 00:40:38.334707 systemd-udevd[1313]: Using default interface naming scheme 'v255'.
Nov 24 00:40:38.348758 zram_generator::config[1339]: No configuration found.
Nov 24 00:40:38.600670 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 24 00:40:38.610377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 24 00:40:38.612238 systemd[1]: Reloading finished in 332 ms.
Nov 24 00:40:38.626665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:40:38.629577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:40:38.637822 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 24 00:40:38.645064 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 00:40:38.664757 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 00:40:38.670020 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 00:40:38.675679 kernel: ACPI: button: Power Button [PWRF]
Nov 24 00:40:38.675665 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 24 00:40:38.678843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 24 00:40:38.686754 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 00:40:38.695747 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:40:38.699517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 24 00:40:38.706882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.708293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:40:38.714966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 00:40:38.718972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:40:38.721331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 00:40:38.723776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:40:38.723872 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:40:38.723950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.727634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.727855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:40:38.728000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:40:38.728070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:40:38.728140 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.729957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:40:38.730520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:40:38.732493 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 00:40:38.743952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.744168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 00:40:38.748014 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 00:40:38.753114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 00:40:38.754199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 00:40:38.754294 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 00:40:38.757919 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 24 00:40:38.758686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 00:40:38.769703 systemd[1]: Finished ensure-sysext.service.
Nov 24 00:40:38.779292 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 24 00:40:38.803422 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 00:40:38.805684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 00:40:38.816103 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 24 00:40:38.818768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 24 00:40:38.822653 kernel: EDAC MC: Ver: 3.0.0
Nov 24 00:40:38.824627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 00:40:38.825777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 00:40:38.832472 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 00:40:38.835833 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 24 00:40:38.857359 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 00:40:38.861860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 00:40:38.862982 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 24 00:40:38.864936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 00:40:38.865480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 00:40:38.868632 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 00:40:38.870703 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 24 00:40:38.878634 augenrules[1478]: No rules
Nov 24 00:40:38.878589 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 24 00:40:38.879927 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 24 00:40:38.892772 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 24 00:40:38.940357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 24 00:40:38.946912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 24 00:40:38.950077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:40:38.975067 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 24 00:40:38.996073 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 24 00:40:39.142806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:40:39.164426 systemd-networkd[1426]: lo: Link UP
Nov 24 00:40:39.164873 systemd-networkd[1426]: lo: Gained carrier
Nov 24 00:40:39.167060 systemd-networkd[1426]: Enumeration completed
Nov 24 00:40:39.167192 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 00:40:39.169222 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:40:39.170006 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 00:40:39.170624 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 24 00:40:39.171560 systemd-networkd[1426]: eth0: Link UP
Nov 24 00:40:39.171955 systemd-networkd[1426]: eth0: Gained carrier
Nov 24 00:40:39.171976 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:40:39.177903 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 24 00:40:39.182341 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 24 00:40:39.183403 systemd[1]: Reached target time-set.target - System Time Set.
Nov 24 00:40:39.187786 systemd-resolved[1431]: Positive Trust Anchors:
Nov 24 00:40:39.187800 systemd-resolved[1431]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:40:39.187827 systemd-resolved[1431]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 00:40:39.191974 systemd-resolved[1431]: Defaulting to hostname 'linux'.
Nov 24 00:40:39.193934 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 00:40:39.194878 systemd[1]: Reached target network.target - Network.
Nov 24 00:40:39.195539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:40:39.196463 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 00:40:39.197431 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 24 00:40:39.198402 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 24 00:40:39.199461 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 24 00:40:39.200376 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 24 00:40:39.201442 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 24 00:40:39.202208 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 24 00:40:39.202972 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 24 00:40:39.203007 systemd[1]: Reached target paths.target - Path Units.
Nov 24 00:40:39.203689 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 00:40:39.205545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 24 00:40:39.207933 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 24 00:40:39.211018 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 24 00:40:39.211934 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 24 00:40:39.212809 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 24 00:40:39.215303 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 24 00:40:39.216510 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 24 00:40:39.218261 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 24 00:40:39.219215 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 24 00:40:39.243160 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:40:39.243876 systemd[1]: Reached target basic.target - Basic System.
Nov 24 00:40:39.244617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 24 00:40:39.244675 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 24 00:40:39.246146 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 24 00:40:39.250004 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 24 00:40:39.256849 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 24 00:40:39.258848 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 24 00:40:39.261811 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 24 00:40:39.268894 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 24 00:40:39.269615 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 24 00:40:39.271114 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 24 00:40:39.287835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 24 00:40:39.292147 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 24 00:40:39.296966 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 24 00:40:39.305493 jq[1513]: false
Nov 24 00:40:39.312101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 24 00:40:39.321183 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Nov 24 00:40:39.321465 oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Nov 24 00:40:39.326670 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting
Nov 24 00:40:39.326670 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 00:40:39.326670 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache
Nov 24 00:40:39.324154 oslogin_cache_refresh[1515]: Failure getting users, quitting
Nov 24 00:40:39.324169 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 00:40:39.324204 oslogin_cache_refresh[1515]: Refreshing group entry cache
Nov 24 00:40:39.327871 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting
Nov 24 00:40:39.327871 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 00:40:39.327270 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 24 00:40:39.327048 oslogin_cache_refresh[1515]: Failure getting groups, quitting
Nov 24 00:40:39.327059 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 00:40:39.330477 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 24 00:40:39.331088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 24 00:40:39.332025 systemd[1]: Starting update-engine.service - Update Engine...
Nov 24 00:40:39.339795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 24 00:40:39.347236 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 24 00:40:39.350904 extend-filesystems[1514]: Found /dev/sda6
Nov 24 00:40:39.351183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 24 00:40:39.351476 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 24 00:40:39.351909 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 24 00:40:39.352188 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 24 00:40:39.355009 systemd[1]: motdgen.service: Deactivated successfully.
Nov 24 00:40:39.355253 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 24 00:40:39.358147 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 24 00:40:39.358383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 24 00:40:39.373028 extend-filesystems[1514]: Found /dev/sda9
Nov 24 00:40:39.385864 extend-filesystems[1514]: Checking size of /dev/sda9
Nov 24 00:40:39.403417 tar[1539]: linux-amd64/LICENSE
Nov 24 00:40:39.404541 tar[1539]: linux-amd64/helm
Nov 24 00:40:39.415771 extend-filesystems[1514]: Resized partition /dev/sda9
Nov 24 00:40:39.416954 update_engine[1533]: I20251124 00:40:39.416602 1533 main.cc:92] Flatcar Update Engine starting
Nov 24 00:40:39.417150 jq[1535]: true
Nov 24 00:40:39.426807 extend-filesystems[1559]: resize2fs 1.47.3 (8-Jul-2025)
Nov 24 00:40:39.430844 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 24 00:40:39.438199 coreos-metadata[1510]: Nov 24 00:40:39.437 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 24 00:40:39.441933 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Nov 24 00:40:39.446563 dbus-daemon[1511]: [system] SELinux support is enabled
Nov 24 00:40:39.446778 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 24 00:40:39.451989 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 24 00:40:39.452021 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 24 00:40:39.453050 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 24 00:40:39.453067 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 24 00:40:39.472155 systemd[1]: Started update-engine.service - Update Engine.
Nov 24 00:40:39.474906 update_engine[1533]: I20251124 00:40:39.473035 1533 update_check_scheduler.cc:74] Next update check in 6m23s
Nov 24 00:40:39.484865 jq[1558]: true
Nov 24 00:40:39.485259 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 24 00:40:39.551515 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 24 00:40:39.551551 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 24 00:40:39.553158 systemd-logind[1528]: New seat seat0.
Nov 24 00:40:39.558418 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 24 00:40:39.633756 containerd[1552]: time="2025-11-24T00:40:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 24 00:40:39.636252 containerd[1552]: time="2025-11-24T00:40:39.635412580Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 24 00:40:39.636294 bash[1582]: Updated "/home/core/.ssh/authorized_keys"
Nov 24 00:40:39.639307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 24 00:40:39.648100 systemd[1]: Starting sshkeys.service...
Nov 24 00:40:39.659420 containerd[1552]: time="2025-11-24T00:40:39.659345170Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="38.57µs"
Nov 24 00:40:39.659467 containerd[1552]: time="2025-11-24T00:40:39.659418210Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 24 00:40:39.659467 containerd[1552]: time="2025-11-24T00:40:39.659442730Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 24 00:40:39.660208 containerd[1552]: time="2025-11-24T00:40:39.659927980Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 24 00:40:39.660208 containerd[1552]: time="2025-11-24T00:40:39.659991690Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 24 00:40:39.660208 containerd[1552]: time="2025-11-24T00:40:39.660085030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660288150Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660312330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660840820Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660858650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660912050Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.660925810Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.661167280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.661726910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.661807540Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.661858860Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 24 00:40:39.663893 containerd[1552]: time="2025-11-24T00:40:39.661945460Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 24 00:40:39.665308 containerd[1552]: time="2025-11-24T00:40:39.662755000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 24 00:40:39.665308 containerd[1552]: time="2025-11-24T00:40:39.662836550Z" level=info msg="metadata content store policy set" policy=shared
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674691310Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674734530Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674761040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674778560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674789040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674798000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674808380Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674829620Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674838460Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674847000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674855230Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674865100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674963690Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 24 00:40:39.675841 containerd[1552]: time="2025-11-24T00:40:39.674981020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.674992890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675003420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675011560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675020970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675029980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675038720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675047570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675055880Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675064630Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675105130Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675121950Z" level=info msg="Start snapshots syncer"
Nov 24 00:40:39.676087 containerd[1552]: time="2025-11-24T00:40:39.675154090Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 24 00:40:39.676328 containerd[1552]: time="2025-11-24T00:40:39.675354200Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 24 00:40:39.676328 containerd[1552]: time="2025-11-24T00:40:39.675394240Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 24 00:40:39.677789 containerd[1552]: time="2025-11-24T00:40:39.677332550Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.677841500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.677871210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.677926680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.677948050Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678002860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678014020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678023170Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678172910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678233080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678243720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678311600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678548240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 24 00:40:39.678856 containerd[1552]: time="2025-11-24T00:40:39.678562470Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678572040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678623500Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678682970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678710610Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678779620Z" level=info msg="runtime interface created"
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678789260Z" level=info msg="created NRI interface"
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678802550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.678939170Z" level=info msg="Connect containerd service"
Nov 24 00:40:39.679084 containerd[1552]: time="2025-11-24T00:40:39.679007960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 24 00:40:39.689327 containerd[1552]: time="2025-11-24T00:40:39.684813920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 24 00:40:39.709732 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 24 00:40:39.718943 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 24 00:40:39.739688 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Nov 24 00:40:39.756367 extend-filesystems[1559]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 24 00:40:39.756367 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 10
Nov 24 00:40:39.756367 extend-filesystems[1559]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Nov 24 00:40:39.793694 extend-filesystems[1514]: Resized filesystem in /dev/sda9
Nov 24 00:40:39.757606 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 24 00:40:39.759167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.817830540Z" level=info msg="Start subscribing containerd event" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.817876570Z" level=info msg="Start recovering state" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.817982980Z" level=info msg="Start event monitor" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.817995730Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818003730Z" level=info msg="Start streaming server" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818011310Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818018580Z" level=info msg="runtime interface starting up..." Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818024030Z" level=info msg="starting plugins..." Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818035840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818703130Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.818861630Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:40:39.822656 containerd[1552]: time="2025-11-24T00:40:39.819874660Z" level=info msg="containerd successfully booted in 0.187429s" Nov 24 00:40:39.819819 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 24 00:40:39.875987 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:40:39.903700 coreos-metadata[1592]: Nov 24 00:40:39.903 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 24 00:40:39.929702 systemd-networkd[1426]: eth0: DHCPv4 address 172.238.170.212/24, gateway 172.238.170.1 acquired from 23.33.177.56 Nov 24 00:40:39.929914 dbus-daemon[1511]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1426 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 00:40:39.931342 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Nov 24 00:40:39.935226 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 24 00:40:40.041523 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 00:40:40.043576 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 00:40:40.044881 dbus-daemon[1511]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1612 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 00:40:40.050998 systemd[1]: Starting polkit.service - Authorization Manager... Nov 24 00:40:40.926010 systemd-resolved[1431]: Clock change detected. Flushing caches. Nov 24 00:40:40.926338 systemd-timesyncd[1454]: Contacted time server 66.244.16.123:123 (0.flatcar.pool.ntp.org). Nov 24 00:40:40.926384 systemd-timesyncd[1454]: Initial clock synchronization to Mon 2025-11-24 00:40:40.925967 UTC. Nov 24 00:40:40.987275 tar[1539]: linux-amd64/README.md Nov 24 00:40:41.005333 polkitd[1613]: Started polkitd version 126 Nov 24 00:40:41.010724 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 24 00:40:41.013787 polkitd[1613]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 00:40:41.014159 polkitd[1613]: Loading rules from directory /run/polkit-1/rules.d Nov 24 00:40:41.014251 polkitd[1613]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:40:41.014489 polkitd[1613]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 00:40:41.014550 polkitd[1613]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:40:41.014635 polkitd[1613]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 00:40:41.015264 polkitd[1613]: Finished loading, compiling and executing 2 rules Nov 24 00:40:41.015536 systemd[1]: Started polkit.service - Authorization Manager. Nov 24 00:40:41.018030 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 00:40:41.018486 polkitd[1613]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 00:40:41.027783 systemd-resolved[1431]: System hostname changed to '172-238-170-212'. Nov 24 00:40:41.028843 systemd-hostnamed[1612]: Hostname set to <172-238-170-212> (transient) Nov 24 00:40:41.140395 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:40:41.162849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:40:41.166493 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 00:40:41.185834 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:40:41.186090 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:40:41.189210 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 00:40:41.207896 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 24 00:40:41.211008 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:40:41.213948 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:40:41.215380 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:40:41.253187 coreos-metadata[1510]: Nov 24 00:40:41.253 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 24 00:40:41.341539 coreos-metadata[1510]: Nov 24 00:40:41.341 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 24 00:40:41.383870 systemd-networkd[1426]: eth0: Gained IPv6LL Nov 24 00:40:41.386824 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:40:41.388314 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:40:41.392425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:40:41.396110 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:40:41.424428 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:40:41.530215 coreos-metadata[1510]: Nov 24 00:40:41.529 INFO Fetch successful Nov 24 00:40:41.530215 coreos-metadata[1510]: Nov 24 00:40:41.530 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 24 00:40:41.720547 coreos-metadata[1592]: Nov 24 00:40:41.720 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 24 00:40:41.809642 coreos-metadata[1592]: Nov 24 00:40:41.809 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 24 00:40:41.815745 coreos-metadata[1510]: Nov 24 00:40:41.815 INFO Fetch successful Nov 24 00:40:41.938700 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:40:41.940486 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 24 00:40:41.946991 coreos-metadata[1592]: Nov 24 00:40:41.946 INFO Fetch successful Nov 24 00:40:41.966497 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:40:41.966961 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 00:40:41.970039 systemd[1]: Finished sshkeys.service. Nov 24 00:40:42.331205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:40:42.332546 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:40:42.334047 systemd[1]: Startup finished in 2.902s (kernel) + 8.800s (initrd) + 5.132s (userspace) = 16.835s. Nov 24 00:40:42.340410 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:40:42.886453 kubelet[1685]: E1124 00:40:42.886384 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:40:42.890165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:40:42.890355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:40:42.890744 systemd[1]: kubelet.service: Consumed 916ms CPU time, 266.9M memory peak. Nov 24 00:40:43.201870 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:40:43.202956 systemd[1]: Started sshd@0-172.238.170.212:22-147.75.109.163:42236.service - OpenSSH per-connection server daemon (147.75.109.163:42236). 
Nov 24 00:40:43.557898 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 42236 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:43.559847 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:43.566811 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:40:43.568061 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:40:43.576567 systemd-logind[1528]: New session 1 of user core. Nov 24 00:40:43.591470 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:40:43.594878 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:40:43.604137 (systemd)[1702]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:40:43.606576 systemd-logind[1528]: New session c1 of user core. Nov 24 00:40:43.734936 systemd[1702]: Queued start job for default target default.target. Nov 24 00:40:43.745917 systemd[1702]: Created slice app.slice - User Application Slice. Nov 24 00:40:43.745944 systemd[1702]: Reached target paths.target - Paths. Nov 24 00:40:43.745987 systemd[1702]: Reached target timers.target - Timers. Nov 24 00:40:43.747585 systemd[1702]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:40:43.759175 systemd[1702]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:40:43.759225 systemd[1702]: Reached target sockets.target - Sockets. Nov 24 00:40:43.759263 systemd[1702]: Reached target basic.target - Basic System. Nov 24 00:40:43.759307 systemd[1702]: Reached target default.target - Main User Target. Nov 24 00:40:43.759342 systemd[1702]: Startup finished in 146ms. Nov 24 00:40:43.759502 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:40:43.769809 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 24 00:40:44.020554 systemd[1]: Started sshd@1-172.238.170.212:22-147.75.109.163:42252.service - OpenSSH per-connection server daemon (147.75.109.163:42252). Nov 24 00:40:44.357021 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 42252 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:44.358457 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:44.365092 systemd-logind[1528]: New session 2 of user core. Nov 24 00:40:44.371002 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 24 00:40:44.593614 sshd[1716]: Connection closed by 147.75.109.163 port 42252 Nov 24 00:40:44.594062 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Nov 24 00:40:44.597305 systemd[1]: sshd@1-172.238.170.212:22-147.75.109.163:42252.service: Deactivated successfully. Nov 24 00:40:44.599031 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:40:44.600386 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:40:44.601580 systemd-logind[1528]: Removed session 2. Nov 24 00:40:44.650861 systemd[1]: Started sshd@2-172.238.170.212:22-147.75.109.163:42254.service - OpenSSH per-connection server daemon (147.75.109.163:42254). Nov 24 00:40:44.973273 sshd[1722]: Accepted publickey for core from 147.75.109.163 port 42254 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:44.975090 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:44.981118 systemd-logind[1528]: New session 3 of user core. Nov 24 00:40:44.986818 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 24 00:40:45.208923 sshd[1725]: Connection closed by 147.75.109.163 port 42254 Nov 24 00:40:45.209466 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Nov 24 00:40:45.213625 systemd[1]: sshd@2-172.238.170.212:22-147.75.109.163:42254.service: Deactivated successfully. Nov 24 00:40:45.215745 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:40:45.217087 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:40:45.219027 systemd-logind[1528]: Removed session 3. Nov 24 00:40:45.274003 systemd[1]: Started sshd@3-172.238.170.212:22-147.75.109.163:42258.service - OpenSSH per-connection server daemon (147.75.109.163:42258). Nov 24 00:40:45.605555 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 42258 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:45.607319 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:45.612672 systemd-logind[1528]: New session 4 of user core. Nov 24 00:40:45.621786 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:40:45.850106 sshd[1734]: Connection closed by 147.75.109.163 port 42258 Nov 24 00:40:45.850728 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Nov 24 00:40:45.854522 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:40:45.855396 systemd[1]: sshd@3-172.238.170.212:22-147.75.109.163:42258.service: Deactivated successfully. Nov 24 00:40:45.857434 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:40:45.859793 systemd-logind[1528]: Removed session 4. Nov 24 00:40:45.915166 systemd[1]: Started sshd@4-172.238.170.212:22-147.75.109.163:42274.service - OpenSSH per-connection server daemon (147.75.109.163:42274). 
Nov 24 00:40:46.255982 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 42274 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:46.257476 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:46.262806 systemd-logind[1528]: New session 5 of user core. Nov 24 00:40:46.269793 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:40:46.461901 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:40:46.462217 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:40:46.479735 sudo[1744]: pam_unix(sudo:session): session closed for user root Nov 24 00:40:46.533150 sshd[1743]: Connection closed by 147.75.109.163 port 42274 Nov 24 00:40:46.534839 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Nov 24 00:40:46.539344 systemd[1]: sshd@4-172.238.170.212:22-147.75.109.163:42274.service: Deactivated successfully. Nov 24 00:40:46.541011 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:40:46.541805 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:40:46.543119 systemd-logind[1528]: Removed session 5. Nov 24 00:40:46.594589 systemd[1]: Started sshd@5-172.238.170.212:22-147.75.109.163:42282.service - OpenSSH per-connection server daemon (147.75.109.163:42282). Nov 24 00:40:46.928919 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 42282 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:46.930657 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:46.936118 systemd-logind[1528]: New session 6 of user core. Nov 24 00:40:46.941995 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 24 00:40:47.124506 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:40:47.124897 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:40:47.129359 sudo[1755]: pam_unix(sudo:session): session closed for user root Nov 24 00:40:47.134947 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:40:47.135253 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:40:47.144655 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:40:47.187660 augenrules[1777]: No rules Nov 24 00:40:47.188275 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:40:47.188533 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:40:47.190457 sudo[1754]: pam_unix(sudo:session): session closed for user root Nov 24 00:40:47.240700 sshd[1753]: Connection closed by 147.75.109.163 port 42282 Nov 24 00:40:47.241196 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Nov 24 00:40:47.245433 systemd[1]: sshd@5-172.238.170.212:22-147.75.109.163:42282.service: Deactivated successfully. Nov 24 00:40:47.247353 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:40:47.248341 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:40:47.249640 systemd-logind[1528]: Removed session 6. Nov 24 00:40:47.300144 systemd[1]: Started sshd@6-172.238.170.212:22-147.75.109.163:42296.service - OpenSSH per-connection server daemon (147.75.109.163:42296). 
Nov 24 00:40:47.627153 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 42296 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:40:47.628524 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:40:47.632931 systemd-logind[1528]: New session 7 of user core. Nov 24 00:40:47.642791 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:40:47.818221 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:40:47.818545 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:40:48.124119 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:40:48.135276 (dockerd)[1807]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:40:48.356936 dockerd[1807]: time="2025-11-24T00:40:48.356870719Z" level=info msg="Starting up" Nov 24 00:40:48.357990 dockerd[1807]: time="2025-11-24T00:40:48.357956259Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:40:48.372213 dockerd[1807]: time="2025-11-24T00:40:48.372171269Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:40:48.396388 systemd[1]: var-lib-docker-metacopy\x2dcheck163889022-merged.mount: Deactivated successfully. Nov 24 00:40:48.420730 dockerd[1807]: time="2025-11-24T00:40:48.420664509Z" level=info msg="Loading containers: start." Nov 24 00:40:48.432871 kernel: Initializing XFRM netlink socket Nov 24 00:40:48.713565 systemd-networkd[1426]: docker0: Link UP Nov 24 00:40:48.718325 dockerd[1807]: time="2025-11-24T00:40:48.718285819Z" level=info msg="Loading containers: done." 
Nov 24 00:40:48.731164 dockerd[1807]: time="2025-11-24T00:40:48.731066419Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:40:48.731164 dockerd[1807]: time="2025-11-24T00:40:48.731138629Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:40:48.731314 dockerd[1807]: time="2025-11-24T00:40:48.731209579Z" level=info msg="Initializing buildkit" Nov 24 00:40:48.733304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4172897937-merged.mount: Deactivated successfully. Nov 24 00:40:48.752572 dockerd[1807]: time="2025-11-24T00:40:48.752544379Z" level=info msg="Completed buildkit initialization" Nov 24 00:40:48.759862 dockerd[1807]: time="2025-11-24T00:40:48.759843249Z" level=info msg="Daemon has completed initialization" Nov 24 00:40:48.759977 dockerd[1807]: time="2025-11-24T00:40:48.759948479Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:40:48.760076 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:40:49.419194 containerd[1552]: time="2025-11-24T00:40:49.419156779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 24 00:40:50.218214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697212657.mount: Deactivated successfully. 
Nov 24 00:40:51.348167 containerd[1552]: time="2025-11-24T00:40:51.348110079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:51.349513 containerd[1552]: time="2025-11-24T00:40:51.349467499Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=30113213" Nov 24 00:40:51.350126 containerd[1552]: time="2025-11-24T00:40:51.350058939Z" level=info msg="ImageCreate event name:\"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:51.355785 containerd[1552]: time="2025-11-24T00:40:51.355591599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:51.358219 containerd[1552]: time="2025-11-24T00:40:51.358170049Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"30109812\" in 1.93897586s" Nov 24 00:40:51.358274 containerd[1552]: time="2025-11-24T00:40:51.358220249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:74cc54db7bbcced6056c8430786ff02557adfb2ad9e548fa2ae02ff4a3b42c73\"" Nov 24 00:40:51.359646 containerd[1552]: time="2025-11-24T00:40:51.359413129Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 24 00:40:52.794294 containerd[1552]: time="2025-11-24T00:40:52.794235949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:52.795575 containerd[1552]: time="2025-11-24T00:40:52.795547039Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=26018107" Nov 24 00:40:52.797416 containerd[1552]: time="2025-11-24T00:40:52.796059149Z" level=info msg="ImageCreate event name:\"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:52.798255 containerd[1552]: time="2025-11-24T00:40:52.798223909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:52.799198 containerd[1552]: time="2025-11-24T00:40:52.799170699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"27675143\" in 1.43972492s" Nov 24 00:40:52.799251 containerd[1552]: time="2025-11-24T00:40:52.799199869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:9290eb63dc141c2f8d019c41484908f600f19daccfbc45c0a856b067ca47b0af\"" Nov 24 00:40:52.800059 containerd[1552]: time="2025-11-24T00:40:52.800036199Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 24 00:40:53.141010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:40:53.143696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:40:53.329583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:40:53.348091 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:40:53.399707 kubelet[2087]: E1124 00:40:53.398490 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:40:53.405238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:40:53.405450 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:40:53.405924 systemd[1]: kubelet.service: Consumed 213ms CPU time, 109.1M memory peak. Nov 24 00:40:54.070355 containerd[1552]: time="2025-11-24T00:40:54.070319529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:54.071441 containerd[1552]: time="2025-11-24T00:40:54.071381309Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=20156482" Nov 24 00:40:54.071741 containerd[1552]: time="2025-11-24T00:40:54.071717009Z" level=info msg="ImageCreate event name:\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:54.074350 containerd[1552]: time="2025-11-24T00:40:54.074311299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:54.075229 containerd[1552]: time="2025-11-24T00:40:54.075206709Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id 
\"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"21813536\" in 1.27514338s" Nov 24 00:40:54.075310 containerd[1552]: time="2025-11-24T00:40:54.075296439Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:6109fc16b0291b0728bc133620fe1906c51d999917dd3add0744a906c0fb7eef\"" Nov 24 00:40:54.075831 containerd[1552]: time="2025-11-24T00:40:54.075671119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 24 00:40:55.214218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630861232.mount: Deactivated successfully. Nov 24 00:40:55.603350 containerd[1552]: time="2025-11-24T00:40:55.602701909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:55.603350 containerd[1552]: time="2025-11-24T00:40:55.603255879Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=31929138" Nov 24 00:40:55.603938 containerd[1552]: time="2025-11-24T00:40:55.603895419Z" level=info msg="ImageCreate event name:\"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:55.605279 containerd[1552]: time="2025-11-24T00:40:55.605257279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:55.605798 containerd[1552]: time="2025-11-24T00:40:55.605767969Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"31928157\" in 1.52970833s" Nov 24 00:40:55.605834 containerd[1552]: time="2025-11-24T00:40:55.605798909Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:87c5a2e6c1d1ea6f96a0b5d43f96c5066e8ff78c9c6adb335631fc9c90cb0a19\"" Nov 24 00:40:55.606452 containerd[1552]: time="2025-11-24T00:40:55.606431139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 24 00:40:56.228274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525333388.mount: Deactivated successfully. Nov 24 00:40:57.001048 containerd[1552]: time="2025-11-24T00:40:57.000993379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:57.002164 containerd[1552]: time="2025-11-24T00:40:57.002136359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 24 00:40:57.002787 containerd[1552]: time="2025-11-24T00:40:57.002756639Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:57.005328 containerd[1552]: time="2025-11-24T00:40:57.004916129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:57.005851 containerd[1552]: time="2025-11-24T00:40:57.005818849Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.39936123s" Nov 24 00:40:57.005899 containerd[1552]: time="2025-11-24T00:40:57.005850929Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 24 00:40:57.006753 containerd[1552]: time="2025-11-24T00:40:57.006720469Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:40:57.598978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586034113.mount: Deactivated successfully. Nov 24 00:40:57.602701 containerd[1552]: time="2025-11-24T00:40:57.602627939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:40:57.603513 containerd[1552]: time="2025-11-24T00:40:57.603455809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:40:57.605362 containerd[1552]: time="2025-11-24T00:40:57.604149129Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:40:57.605900 containerd[1552]: time="2025-11-24T00:40:57.605869119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:40:57.606587 containerd[1552]: time="2025-11-24T00:40:57.606557609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 599.80956ms" Nov 24 00:40:57.606671 containerd[1552]: time="2025-11-24T00:40:57.606654009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:40:57.607476 containerd[1552]: time="2025-11-24T00:40:57.607449959Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 24 00:40:58.277146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374739778.mount: Deactivated successfully. Nov 24 00:40:59.955961 containerd[1552]: time="2025-11-24T00:40:59.955866299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:59.958691 containerd[1552]: time="2025-11-24T00:40:59.958629509Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:59.958838 containerd[1552]: time="2025-11-24T00:40:59.958811069Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Nov 24 00:40:59.963358 containerd[1552]: time="2025-11-24T00:40:59.963308139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:40:59.965226 containerd[1552]: time="2025-11-24T00:40:59.965189399Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.35770587s" Nov 24 00:40:59.965226 containerd[1552]: time="2025-11-24T00:40:59.965222979Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 24 00:41:02.834377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:41:02.834517 systemd[1]: kubelet.service: Consumed 213ms CPU time, 109.1M memory peak. Nov 24 00:41:02.837244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:41:02.866920 systemd[1]: Reload requested from client PID 2245 ('systemctl') (unit session-7.scope)... Nov 24 00:41:02.866941 systemd[1]: Reloading... Nov 24 00:41:03.030707 zram_generator::config[2301]: No configuration found. Nov 24 00:41:03.238566 systemd[1]: Reloading finished in 371 ms. Nov 24 00:41:03.308552 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:41:03.308783 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 00:41:03.309409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:41:03.309461 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.3M memory peak. Nov 24 00:41:03.311541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:41:03.505212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:41:03.517034 (kubelet)[2343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:41:03.576715 kubelet[2343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:41:03.579704 kubelet[2343]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 24 00:41:03.579704 kubelet[2343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:41:03.579704 kubelet[2343]: I1124 00:41:03.577582 2343 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:41:03.999767 kubelet[2343]: I1124 00:41:03.999281 2343 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:41:03.999767 kubelet[2343]: I1124 00:41:03.999305 2343 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:41:03.999767 kubelet[2343]: I1124 00:41:03.999522 2343 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:41:04.044713 kubelet[2343]: I1124 00:41:04.044564 2343 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:41:04.045737 kubelet[2343]: E1124 00:41:04.045668 2343 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.170.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.170.212:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 00:41:04.051670 kubelet[2343]: I1124 00:41:04.051650 2343 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:41:04.056015 kubelet[2343]: I1124 00:41:04.055985 2343 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:41:04.056294 kubelet[2343]: I1124 00:41:04.056266 2343 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:41:04.056416 kubelet[2343]: I1124 00:41:04.056289 2343 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-170-212","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:41:04.056517 kubelet[2343]: I1124 00:41:04.056421 2343 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 
00:41:04.056517 kubelet[2343]: I1124 00:41:04.056430 2343 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:41:04.056564 kubelet[2343]: I1124 00:41:04.056540 2343 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:41:04.059589 kubelet[2343]: I1124 00:41:04.059465 2343 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:41:04.059589 kubelet[2343]: I1124 00:41:04.059486 2343 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:41:04.059589 kubelet[2343]: I1124 00:41:04.059510 2343 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:41:04.060972 kubelet[2343]: I1124 00:41:04.060959 2343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:41:04.065691 kubelet[2343]: E1124 00:41:04.065446 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.170.212:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-170-212&limit=500&resourceVersion=0\": dial tcp 172.238.170.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 00:41:04.065842 kubelet[2343]: E1124 00:41:04.065821 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.170.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.170.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 00:41:04.066040 kubelet[2343]: I1124 00:41:04.066021 2343 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:41:04.066445 kubelet[2343]: I1124 00:41:04.066428 2343 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 
00:41:04.067367 kubelet[2343]: W1124 00:41:04.067347 2343 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:41:04.080801 kubelet[2343]: I1124 00:41:04.080786 2343 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:41:04.082355 kubelet[2343]: I1124 00:41:04.082167 2343 server.go:1289] "Started kubelet" Nov 24 00:41:04.083233 kubelet[2343]: I1124 00:41:04.082987 2343 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:41:04.084191 kubelet[2343]: I1124 00:41:04.084177 2343 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:41:04.087163 kubelet[2343]: I1124 00:41:04.087123 2343 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:41:04.087636 kubelet[2343]: I1124 00:41:04.087615 2343 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:41:04.089973 kubelet[2343]: E1124 00:41:04.087718 2343 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.170.212:6443/api/v1/namespaces/default/events\": dial tcp 172.238.170.212:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-170-212.187aca8212fd9f3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-170-212,UID:172-238-170-212,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-170-212,},FirstTimestamp:2025-11-24 00:41:04.082140989 +0000 UTC m=+0.558826991,LastTimestamp:2025-11-24 00:41:04.082140989 +0000 UTC m=+0.558826991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-170-212,}" Nov 24 00:41:04.090904 kubelet[2343]: I1124 00:41:04.090889 2343 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:41:04.092169 kubelet[2343]: I1124 00:41:04.092142 2343 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:41:04.094406 kubelet[2343]: E1124 00:41:04.094378 2343 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-170-212\" not found" Nov 24 00:41:04.095203 kubelet[2343]: I1124 00:41:04.094585 2343 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:41:04.095416 kubelet[2343]: E1124 00:41:04.094645 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.170.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-170-212?timeout=10s\": dial tcp 172.238.170.212:6443: connect: connection refused" interval="200ms" Nov 24 00:41:04.095416 kubelet[2343]: E1124 00:41:04.095105 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.170.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.170.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 00:41:04.095416 kubelet[2343]: I1124 00:41:04.095234 2343 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:41:04.095502 kubelet[2343]: I1124 00:41:04.095455 2343 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:41:04.095567 kubelet[2343]: I1124 00:41:04.095400 2343 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:41:04.096521 kubelet[2343]: E1124 00:41:04.096506 2343 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:41:04.096925 kubelet[2343]: I1124 00:41:04.096913 2343 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:41:04.097061 kubelet[2343]: I1124 00:41:04.096983 2343 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:41:04.117349 kubelet[2343]: I1124 00:41:04.117335 2343 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:41:04.117349 kubelet[2343]: I1124 00:41:04.117346 2343 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:41:04.117439 kubelet[2343]: I1124 00:41:04.117359 2343 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:41:04.120394 kubelet[2343]: I1124 00:41:04.120253 2343 policy_none.go:49] "None policy: Start" Nov 24 00:41:04.120394 kubelet[2343]: I1124 00:41:04.120269 2343 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:41:04.120394 kubelet[2343]: I1124 00:41:04.120279 2343 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:41:04.127389 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:41:04.135917 kubelet[2343]: I1124 00:41:04.135896 2343 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:41:04.138212 kubelet[2343]: I1124 00:41:04.137947 2343 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:41:04.138212 kubelet[2343]: I1124 00:41:04.137966 2343 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:41:04.138212 kubelet[2343]: I1124 00:41:04.137998 2343 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:41:04.138212 kubelet[2343]: I1124 00:41:04.138208 2343 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:41:04.138305 kubelet[2343]: E1124 00:41:04.138245 2343 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:41:04.139317 kubelet[2343]: E1124 00:41:04.139281 2343 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.170.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.170.212:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 00:41:04.142604 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:41:04.147500 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:41:04.156643 kubelet[2343]: E1124 00:41:04.156629 2343 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:41:04.156877 kubelet[2343]: I1124 00:41:04.156864 2343 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:41:04.156945 kubelet[2343]: I1124 00:41:04.156924 2343 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:41:04.157122 kubelet[2343]: I1124 00:41:04.157103 2343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:41:04.158114 kubelet[2343]: E1124 00:41:04.158100 2343 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:41:04.158212 kubelet[2343]: E1124 00:41:04.158201 2343 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-170-212\" not found" Nov 24 00:41:04.249773 systemd[1]: Created slice kubepods-burstable-pod4ffed519a34b57324fcd3ae585160d59.slice - libcontainer container kubepods-burstable-pod4ffed519a34b57324fcd3ae585160d59.slice. Nov 24 00:41:04.257608 kubelet[2343]: E1124 00:41:04.257589 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:04.258996 kubelet[2343]: I1124 00:41:04.258969 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-170-212" Nov 24 00:41:04.259183 kubelet[2343]: E1124 00:41:04.259166 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.170.212:6443/api/v1/nodes\": dial tcp 172.238.170.212:6443: connect: connection refused" node="172-238-170-212" Nov 24 00:41:04.262238 systemd[1]: Created slice kubepods-burstable-pod126d484d12db26ea1abce1c0ea24f873.slice - libcontainer container kubepods-burstable-pod126d484d12db26ea1abce1c0ea24f873.slice. Nov 24 00:41:04.271001 kubelet[2343]: E1124 00:41:04.270985 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:04.273782 systemd[1]: Created slice kubepods-burstable-pod83a1ec113e55052a3f9c1af63a05eac7.slice - libcontainer container kubepods-burstable-pod83a1ec113e55052a3f9c1af63a05eac7.slice. 
Nov 24 00:41:04.275566 kubelet[2343]: E1124 00:41:04.275538 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:04.295913 kubelet[2343]: E1124 00:41:04.295882 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.170.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-170-212?timeout=10s\": dial tcp 172.238.170.212:6443: connect: connection refused" interval="400ms" Nov 24 00:41:04.296921 kubelet[2343]: I1124 00:41:04.296903 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:04.296921 kubelet[2343]: I1124 00:41:04.296928 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-kubeconfig\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:04.297008 kubelet[2343]: I1124 00:41:04.296946 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:04.297008 kubelet[2343]: I1124 00:41:04.296961 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83a1ec113e55052a3f9c1af63a05eac7-kubeconfig\") pod \"kube-scheduler-172-238-170-212\" (UID: \"83a1ec113e55052a3f9c1af63a05eac7\") " pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:04.297008 kubelet[2343]: I1124 00:41:04.296974 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-k8s-certs\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:04.297008 kubelet[2343]: I1124 00:41:04.296987 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-ca-certs\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:04.297008 kubelet[2343]: I1124 00:41:04.297001 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-flexvolume-dir\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:04.297155 kubelet[2343]: I1124 00:41:04.297014 2343 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-k8s-certs\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:04.297155 kubelet[2343]: I1124 00:41:04.297028 2343 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-ca-certs\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:04.461391 kubelet[2343]: I1124 00:41:04.461357 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-170-212" Nov 24 00:41:04.461574 kubelet[2343]: E1124 00:41:04.461544 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.170.212:6443/api/v1/nodes\": dial tcp 172.238.170.212:6443: connect: connection refused" node="172-238-170-212" Nov 24 00:41:04.559166 kubelet[2343]: E1124 00:41:04.559067 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.560211 containerd[1552]: time="2025-11-24T00:41:04.559841969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-170-212,Uid:4ffed519a34b57324fcd3ae585160d59,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:04.571695 kubelet[2343]: E1124 00:41:04.571444 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.575145 containerd[1552]: time="2025-11-24T00:41:04.575120449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-170-212,Uid:126d484d12db26ea1abce1c0ea24f873,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:04.576292 kubelet[2343]: E1124 00:41:04.576010 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.579669 
containerd[1552]: time="2025-11-24T00:41:04.579648099Z" level=info msg="connecting to shim 473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139" address="unix:///run/containerd/s/a54fd0e21a93747ae44a0a003941ed5324f777aef3937ac2f6bbb97db2019884" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:04.580471 containerd[1552]: time="2025-11-24T00:41:04.580452309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-170-212,Uid:83a1ec113e55052a3f9c1af63a05eac7,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:04.614465 containerd[1552]: time="2025-11-24T00:41:04.614425079Z" level=info msg="connecting to shim 9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c" address="unix:///run/containerd/s/f50557a1c00f049943dc2166c9ba82b4163abf4bffbcc25e10309045618e0621" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:04.615425 containerd[1552]: time="2025-11-24T00:41:04.615391639Z" level=info msg="connecting to shim 51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c" address="unix:///run/containerd/s/c6e0d8a22e52d105895dc81229aa207bd99792cc55cdc4432773bfd57e058c68" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:04.623981 systemd[1]: Started cri-containerd-473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139.scope - libcontainer container 473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139. Nov 24 00:41:04.659795 systemd[1]: Started cri-containerd-51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c.scope - libcontainer container 51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c. Nov 24 00:41:04.665633 systemd[1]: Started cri-containerd-9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c.scope - libcontainer container 9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c. 
Nov 24 00:41:04.697081 kubelet[2343]: E1124 00:41:04.697042 2343 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.170.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-170-212?timeout=10s\": dial tcp 172.238.170.212:6443: connect: connection refused" interval="800ms" Nov 24 00:41:04.698637 containerd[1552]: time="2025-11-24T00:41:04.698120609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-170-212,Uid:4ffed519a34b57324fcd3ae585160d59,Namespace:kube-system,Attempt:0,} returns sandbox id \"473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139\"" Nov 24 00:41:04.700218 kubelet[2343]: E1124 00:41:04.700098 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.704520 containerd[1552]: time="2025-11-24T00:41:04.704486699Z" level=info msg="CreateContainer within sandbox \"473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:41:04.710784 containerd[1552]: time="2025-11-24T00:41:04.710759959Z" level=info msg="Container 2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:04.715887 containerd[1552]: time="2025-11-24T00:41:04.715863609Z" level=info msg="CreateContainer within sandbox \"473a429f8d8ac7d473ec06248d147f75d1b5425b6a0ef8dcd8b7d5f138949139\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1\"" Nov 24 00:41:04.716488 containerd[1552]: time="2025-11-24T00:41:04.716465359Z" level=info msg="StartContainer for \"2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1\"" Nov 24 00:41:04.717566 containerd[1552]: time="2025-11-24T00:41:04.717542789Z" 
level=info msg="connecting to shim 2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1" address="unix:///run/containerd/s/a54fd0e21a93747ae44a0a003941ed5324f777aef3937ac2f6bbb97db2019884" protocol=ttrpc version=3 Nov 24 00:41:04.738868 systemd[1]: Started cri-containerd-2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1.scope - libcontainer container 2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1. Nov 24 00:41:04.767772 containerd[1552]: time="2025-11-24T00:41:04.767740519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-170-212,Uid:126d484d12db26ea1abce1c0ea24f873,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c\"" Nov 24 00:41:04.768859 kubelet[2343]: E1124 00:41:04.768818 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.773863 containerd[1552]: time="2025-11-24T00:41:04.773841739Z" level=info msg="CreateContainer within sandbox \"9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:41:04.783295 containerd[1552]: time="2025-11-24T00:41:04.783265909Z" level=info msg="Container 584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:04.786750 containerd[1552]: time="2025-11-24T00:41:04.786708129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-170-212,Uid:83a1ec113e55052a3f9c1af63a05eac7,Namespace:kube-system,Attempt:0,} returns sandbox id \"51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c\"" Nov 24 00:41:04.788098 kubelet[2343]: E1124 00:41:04.787395 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:04.790643 containerd[1552]: time="2025-11-24T00:41:04.790622679Z" level=info msg="CreateContainer within sandbox \"9b3cb9bc1685f965d8d9597374f230a431d7ba03f7ba3abe8055d1201ef98c3c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6\"" Nov 24 00:41:04.791294 containerd[1552]: time="2025-11-24T00:41:04.791277259Z" level=info msg="StartContainer for \"584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6\"" Nov 24 00:41:04.791527 containerd[1552]: time="2025-11-24T00:41:04.791361489Z" level=info msg="CreateContainer within sandbox \"51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:41:04.793119 containerd[1552]: time="2025-11-24T00:41:04.793077219Z" level=info msg="connecting to shim 584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6" address="unix:///run/containerd/s/f50557a1c00f049943dc2166c9ba82b4163abf4bffbcc25e10309045618e0621" protocol=ttrpc version=3 Nov 24 00:41:04.802867 containerd[1552]: time="2025-11-24T00:41:04.802662189Z" level=info msg="Container 36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:04.811519 containerd[1552]: time="2025-11-24T00:41:04.811440509Z" level=info msg="CreateContainer within sandbox \"51a71d4e755e2d73520535865a8ff5a7b8a02e2c80e8d7d2edf9fe3870f2688c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de\"" Nov 24 00:41:04.812407 containerd[1552]: time="2025-11-24T00:41:04.812390229Z" level=info msg="StartContainer for \"36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de\"" Nov 24 00:41:04.818617 containerd[1552]: 
time="2025-11-24T00:41:04.818587229Z" level=info msg="connecting to shim 36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de" address="unix:///run/containerd/s/c6e0d8a22e52d105895dc81229aa207bd99792cc55cdc4432773bfd57e058c68" protocol=ttrpc version=3 Nov 24 00:41:04.823920 systemd[1]: Started cri-containerd-584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6.scope - libcontainer container 584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6. Nov 24 00:41:04.835719 containerd[1552]: time="2025-11-24T00:41:04.835697949Z" level=info msg="StartContainer for \"2fc94ab1e9eb1c5e3a1dcdc72855976c2b76dca1a7176f9e70ee841a3ec0d5d1\" returns successfully" Nov 24 00:41:04.865373 kubelet[2343]: I1124 00:41:04.865024 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-170-212" Nov 24 00:41:04.867216 kubelet[2343]: E1124 00:41:04.867195 2343 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.170.212:6443/api/v1/nodes\": dial tcp 172.238.170.212:6443: connect: connection refused" node="172-238-170-212" Nov 24 00:41:04.872799 systemd[1]: Started cri-containerd-36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de.scope - libcontainer container 36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de. 
Nov 24 00:41:04.915544 containerd[1552]: time="2025-11-24T00:41:04.915499339Z" level=info msg="StartContainer for \"584c99c54e5dc2f6d71fc7a7af4dc7c77b097f54b5c0025733ceebde293030a6\" returns successfully" Nov 24 00:41:05.003832 containerd[1552]: time="2025-11-24T00:41:05.003795949Z" level=info msg="StartContainer for \"36ed50553a49122642a6ee63d5ed9532dbc018bb1f2b902c7bb83712aad5e6de\" returns successfully" Nov 24 00:41:05.149256 kubelet[2343]: E1124 00:41:05.148746 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:05.149256 kubelet[2343]: E1124 00:41:05.148839 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:05.153376 kubelet[2343]: E1124 00:41:05.153304 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:05.153599 kubelet[2343]: E1124 00:41:05.153562 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:05.154454 kubelet[2343]: E1124 00:41:05.154413 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:05.154700 kubelet[2343]: E1124 00:41:05.154632 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:05.672444 kubelet[2343]: I1124 00:41:05.672411 2343 kubelet_node_status.go:75] "Attempting to register node" node="172-238-170-212" Nov 24 
00:41:06.157600 kubelet[2343]: E1124 00:41:06.157440 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:06.159632 kubelet[2343]: E1124 00:41:06.159579 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:06.159865 kubelet[2343]: E1124 00:41:06.159850 2343 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:06.160000 kubelet[2343]: E1124 00:41:06.159986 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:06.341825 kubelet[2343]: E1124 00:41:06.341782 2343 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-170-212\" not found" node="172-238-170-212" Nov 24 00:41:06.379555 kubelet[2343]: I1124 00:41:06.379521 2343 kubelet_node_status.go:78] "Successfully registered node" node="172-238-170-212" Nov 24 00:41:06.396320 kubelet[2343]: I1124 00:41:06.396281 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:06.404750 kubelet[2343]: E1124 00:41:06.404719 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-170-212\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:06.404750 kubelet[2343]: I1124 00:41:06.404745 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:06.406297 kubelet[2343]: E1124 00:41:06.406253 2343 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-170-212\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:06.406297 kubelet[2343]: I1124 00:41:06.406296 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:06.408791 kubelet[2343]: E1124 00:41:06.408698 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-170-212\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:07.066922 kubelet[2343]: I1124 00:41:07.066891 2343 apiserver.go:52] "Watching apiserver" Nov 24 00:41:07.095756 kubelet[2343]: I1124 00:41:07.095741 2343 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:41:07.157110 kubelet[2343]: I1124 00:41:07.156693 2343 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:07.158516 kubelet[2343]: E1124 00:41:07.158476 2343 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-170-212\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:07.158848 kubelet[2343]: E1124 00:41:07.158664 2343 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:08.466960 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... Nov 24 00:41:08.466979 systemd[1]: Reloading... Nov 24 00:41:08.563763 zram_generator::config[2666]: No configuration found. Nov 24 00:41:08.793791 systemd[1]: Reloading finished in 326 ms. 
Nov 24 00:41:08.828800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:41:08.851166 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:41:08.852705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:41:08.852773 systemd[1]: kubelet.service: Consumed 985ms CPU time, 131.4M memory peak. Nov 24 00:41:08.857870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:41:09.079890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:41:09.093067 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:41:09.132124 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:41:09.132124 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:41:09.132124 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 00:41:09.132505 kubelet[2717]: I1124 00:41:09.132146 2717 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:41:09.138441 kubelet[2717]: I1124 00:41:09.138405 2717 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 24 00:41:09.138441 kubelet[2717]: I1124 00:41:09.138427 2717 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:41:09.138619 kubelet[2717]: I1124 00:41:09.138588 2717 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 00:41:09.139639 kubelet[2717]: I1124 00:41:09.139610 2717 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 00:41:09.143737 kubelet[2717]: I1124 00:41:09.143008 2717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:41:09.150409 kubelet[2717]: I1124 00:41:09.150380 2717 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:41:09.154724 kubelet[2717]: I1124 00:41:09.154360 2717 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:41:09.154887 kubelet[2717]: I1124 00:41:09.154761 2717 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:41:09.154985 kubelet[2717]: I1124 00:41:09.154791 2717 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-170-212","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:41:09.155065 kubelet[2717]: I1124 00:41:09.154992 2717 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 
00:41:09.155065 kubelet[2717]: I1124 00:41:09.155005 2717 container_manager_linux.go:303] "Creating device plugin manager" Nov 24 00:41:09.155065 kubelet[2717]: I1124 00:41:09.155067 2717 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:41:09.155925 kubelet[2717]: I1124 00:41:09.155278 2717 kubelet.go:480] "Attempting to sync node with API server" Nov 24 00:41:09.155925 kubelet[2717]: I1124 00:41:09.155303 2717 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:41:09.155925 kubelet[2717]: I1124 00:41:09.155331 2717 kubelet.go:386] "Adding apiserver pod source" Nov 24 00:41:09.155925 kubelet[2717]: I1124 00:41:09.155351 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:41:09.158025 kubelet[2717]: I1124 00:41:09.157975 2717 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:41:09.158998 kubelet[2717]: I1124 00:41:09.158594 2717 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 00:41:09.162269 kubelet[2717]: I1124 00:41:09.162255 2717 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:41:09.162384 kubelet[2717]: I1124 00:41:09.162369 2717 server.go:1289] "Started kubelet" Nov 24 00:41:09.165612 kubelet[2717]: I1124 00:41:09.165592 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:41:09.176079 kubelet[2717]: I1124 00:41:09.176051 2717 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:41:09.177708 kubelet[2717]: I1124 00:41:09.177230 2717 server.go:317] "Adding debug handlers to kubelet server" Nov 24 00:41:09.181752 kubelet[2717]: I1124 00:41:09.181713 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:41:09.181974 kubelet[2717]: I1124 00:41:09.181949 2717 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:41:09.182264 kubelet[2717]: I1124 00:41:09.182248 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:41:09.183828 kubelet[2717]: I1124 00:41:09.183815 2717 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:41:09.190167 kubelet[2717]: I1124 00:41:09.189612 2717 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:41:09.190501 kubelet[2717]: I1124 00:41:09.190361 2717 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:41:09.195097 kubelet[2717]: I1124 00:41:09.195075 2717 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 24 00:41:09.197400 kubelet[2717]: E1124 00:41:09.196378 2717 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:41:09.198406 kubelet[2717]: I1124 00:41:09.198388 2717 factory.go:223] Registration of the systemd container factory successfully Nov 24 00:41:09.198571 kubelet[2717]: I1124 00:41:09.198552 2717 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:41:09.200039 kubelet[2717]: I1124 00:41:09.199803 2717 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 24 00:41:09.200039 kubelet[2717]: I1124 00:41:09.199853 2717 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 24 00:41:09.200039 kubelet[2717]: I1124 00:41:09.199881 2717 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 00:41:09.200039 kubelet[2717]: I1124 00:41:09.199890 2717 kubelet.go:2436] "Starting kubelet main sync loop" Nov 24 00:41:09.200039 kubelet[2717]: E1124 00:41:09.199961 2717 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:41:09.208266 kubelet[2717]: I1124 00:41:09.208250 2717 factory.go:223] Registration of the containerd container factory successfully Nov 24 00:41:09.271247 kubelet[2717]: I1124 00:41:09.271171 2717 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:41:09.271247 kubelet[2717]: I1124 00:41:09.271221 2717 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:41:09.271247 kubelet[2717]: I1124 00:41:09.271265 2717 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:41:09.271460 kubelet[2717]: I1124 00:41:09.271433 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:41:09.271499 kubelet[2717]: I1124 00:41:09.271452 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:41:09.271499 kubelet[2717]: I1124 00:41:09.271478 2717 policy_none.go:49] "None policy: Start" Nov 24 00:41:09.271499 kubelet[2717]: I1124 00:41:09.271490 2717 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:41:09.271499 kubelet[2717]: I1124 00:41:09.271502 2717 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:41:09.271635 kubelet[2717]: I1124 00:41:09.271603 2717 state_mem.go:75] "Updated machine memory state" Nov 24 00:41:09.277698 kubelet[2717]: E1124 00:41:09.277570 2717 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 00:41:09.277855 kubelet[2717]: I1124 00:41:09.277826 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:41:09.277906 kubelet[2717]: I1124 00:41:09.277850 2717 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:41:09.280036 kubelet[2717]: I1124 00:41:09.280003 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:41:09.286610 kubelet[2717]: E1124 00:41:09.286338 2717 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:41:09.301264 kubelet[2717]: I1124 00:41:09.301182 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:09.301711 kubelet[2717]: I1124 00:41:09.301343 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.302019 kubelet[2717]: I1124 00:41:09.301418 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:09.386092 kubelet[2717]: I1124 00:41:09.385988 2717 kubelet_node_status.go:75] "Attempting to register node" node="172-238-170-212" Nov 24 00:41:09.392920 kubelet[2717]: I1124 00:41:09.392893 2717 kubelet_node_status.go:124] "Node was previously registered" node="172-238-170-212" Nov 24 00:41:09.392985 kubelet[2717]: I1124 00:41:09.392973 2717 kubelet_node_status.go:78] "Successfully registered node" node="172-238-170-212" Nov 24 00:41:09.469338 sudo[2754]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 24 00:41:09.469793 sudo[2754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 24 00:41:09.491985 kubelet[2717]: I1124 00:41:09.491654 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-ca-certs\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " 
pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.492057 kubelet[2717]: I1124 00:41:09.492043 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-ca-certs\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:09.492085 kubelet[2717]: I1124 00:41:09.492067 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-flexvolume-dir\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.492118 kubelet[2717]: I1124 00:41:09.492092 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-k8s-certs\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.492144 kubelet[2717]: I1124 00:41:09.492117 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-kubeconfig\") pod \"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.492174 kubelet[2717]: I1124 00:41:09.492135 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/126d484d12db26ea1abce1c0ea24f873-usr-share-ca-certificates\") pod 
\"kube-controller-manager-172-238-170-212\" (UID: \"126d484d12db26ea1abce1c0ea24f873\") " pod="kube-system/kube-controller-manager-172-238-170-212" Nov 24 00:41:09.492174 kubelet[2717]: I1124 00:41:09.492165 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83a1ec113e55052a3f9c1af63a05eac7-kubeconfig\") pod \"kube-scheduler-172-238-170-212\" (UID: \"83a1ec113e55052a3f9c1af63a05eac7\") " pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:09.492215 kubelet[2717]: I1124 00:41:09.492181 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-k8s-certs\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:09.492215 kubelet[2717]: I1124 00:41:09.492202 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ffed519a34b57324fcd3ae585160d59-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-170-212\" (UID: \"4ffed519a34b57324fcd3ae585160d59\") " pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:09.609915 kubelet[2717]: E1124 00:41:09.609873 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:09.611919 kubelet[2717]: E1124 00:41:09.611884 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:09.612248 kubelet[2717]: E1124 00:41:09.612219 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:09.808869 sudo[2754]: pam_unix(sudo:session): session closed for user root Nov 24 00:41:10.163703 kubelet[2717]: I1124 00:41:10.163015 2717 apiserver.go:52] "Watching apiserver" Nov 24 00:41:10.190708 kubelet[2717]: I1124 00:41:10.190653 2717 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:41:10.241919 kubelet[2717]: E1124 00:41:10.240643 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:10.242500 kubelet[2717]: I1124 00:41:10.242480 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:10.242730 kubelet[2717]: I1124 00:41:10.242547 2717 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:10.250977 kubelet[2717]: E1124 00:41:10.250952 2717 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-170-212\" already exists" pod="kube-system/kube-apiserver-172-238-170-212" Nov 24 00:41:10.251270 kubelet[2717]: E1124 00:41:10.251212 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:10.252982 kubelet[2717]: E1124 00:41:10.252940 2717 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-170-212\" already exists" pod="kube-system/kube-scheduler-172-238-170-212" Nov 24 00:41:10.253131 kubelet[2717]: E1124 00:41:10.253110 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 
172.232.0.22" Nov 24 00:41:10.265768 kubelet[2717]: I1124 00:41:10.265505 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-170-212" podStartSLOduration=1.265492879 podStartE2EDuration="1.265492879s" podCreationTimestamp="2025-11-24 00:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:10.265137549 +0000 UTC m=+1.166469001" watchObservedRunningTime="2025-11-24 00:41:10.265492879 +0000 UTC m=+1.166824331" Nov 24 00:41:10.279819 kubelet[2717]: I1124 00:41:10.279774 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-170-212" podStartSLOduration=1.279759329 podStartE2EDuration="1.279759329s" podCreationTimestamp="2025-11-24 00:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:10.278632389 +0000 UTC m=+1.179963841" watchObservedRunningTime="2025-11-24 00:41:10.279759329 +0000 UTC m=+1.181090781" Nov 24 00:41:10.279926 kubelet[2717]: I1124 00:41:10.279853 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-170-212" podStartSLOduration=1.279849689 podStartE2EDuration="1.279849689s" podCreationTimestamp="2025-11-24 00:41:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:10.272462419 +0000 UTC m=+1.173793871" watchObservedRunningTime="2025-11-24 00:41:10.279849689 +0000 UTC m=+1.181181151" Nov 24 00:41:11.063009 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 24 00:41:11.092196 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 24 00:41:11.140437 sshd[1789]: Connection closed by 147.75.109.163 port 42296 Nov 24 00:41:11.140843 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Nov 24 00:41:11.145260 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:41:11.146079 systemd[1]: sshd@6-172.238.170.212:22-147.75.109.163:42296.service: Deactivated successfully. Nov 24 00:41:11.148538 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:41:11.148805 systemd[1]: session-7.scope: Consumed 4.538s CPU time, 273.6M memory peak. Nov 24 00:41:11.151464 systemd-logind[1528]: Removed session 7. Nov 24 00:41:11.242671 kubelet[2717]: E1124 00:41:11.242631 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:11.243056 kubelet[2717]: E1124 00:41:11.242992 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:12.244139 kubelet[2717]: E1124 00:41:12.243960 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:13.046779 kubelet[2717]: E1124 00:41:13.046731 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:13.570396 kubelet[2717]: I1124 00:41:13.570329 2717 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:41:13.572967 kubelet[2717]: I1124 00:41:13.571008 2717 kubelet_network.go:61] "Updating Pod 
CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:41:13.573051 containerd[1552]: time="2025-11-24T00:41:13.570737989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:41:14.053338 systemd[1]: Created slice kubepods-besteffort-pod7582fc7b_3e47_4385_8952_721f4ee24d23.slice - libcontainer container kubepods-besteffort-pod7582fc7b_3e47_4385_8952_721f4ee24d23.slice. Nov 24 00:41:14.079279 systemd[1]: Created slice kubepods-burstable-pod86fa2431_929e_4af1_bca3_4df3a5ec27d2.slice - libcontainer container kubepods-burstable-pod86fa2431_929e_4af1_bca3_4df3a5ec27d2.slice. Nov 24 00:41:14.083141 kubelet[2717]: E1124 00:41:14.081912 2717 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:172-238-170-212\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-170-212' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Nov 24 00:41:14.083141 kubelet[2717]: E1124 00:41:14.081989 2717 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:172-238-170-212\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-170-212' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Nov 24 00:41:14.083141 kubelet[2717]: E1124 00:41:14.082042 2717 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-238-170-212\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-170-212' and this object" logger="UnhandledError" 
reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Nov 24 00:41:14.083141 kubelet[2717]: I1124 00:41:14.082080 2717 status_manager.go:895] "Failed to get status for pod" podUID="86fa2431-929e-4af1-bca3-4df3a5ec27d2" pod="kube-system/cilium-tmgsz" err="pods \"cilium-tmgsz\" is forbidden: User \"system:node:172-238-170-212\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-170-212' and this object" Nov 24 00:41:14.122469 kubelet[2717]: I1124 00:41:14.122425 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hostproc\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122469 kubelet[2717]: I1124 00:41:14.122462 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cni-path\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122469 kubelet[2717]: I1124 00:41:14.122482 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-lib-modules\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122746 kubelet[2717]: I1124 00:41:14.122509 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7582fc7b-3e47-4385-8952-721f4ee24d23-kube-proxy\") pod \"kube-proxy-sv262\" (UID: \"7582fc7b-3e47-4385-8952-721f4ee24d23\") " pod="kube-system/kube-proxy-sv262" Nov 24 00:41:14.122746 
kubelet[2717]: I1124 00:41:14.122524 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7582fc7b-3e47-4385-8952-721f4ee24d23-xtables-lock\") pod \"kube-proxy-sv262\" (UID: \"7582fc7b-3e47-4385-8952-721f4ee24d23\") " pod="kube-system/kube-proxy-sv262" Nov 24 00:41:14.122746 kubelet[2717]: I1124 00:41:14.122539 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-bpf-maps\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122746 kubelet[2717]: I1124 00:41:14.122555 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7582fc7b-3e47-4385-8952-721f4ee24d23-lib-modules\") pod \"kube-proxy-sv262\" (UID: \"7582fc7b-3e47-4385-8952-721f4ee24d23\") " pod="kube-system/kube-proxy-sv262" Nov 24 00:41:14.122746 kubelet[2717]: I1124 00:41:14.122569 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-cgroup\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122746 kubelet[2717]: I1124 00:41:14.122584 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-xtables-lock\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122888 kubelet[2717]: I1124 00:41:14.122598 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fa2431-929e-4af1-bca3-4df3a5ec27d2-clustermesh-secrets\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122888 kubelet[2717]: I1124 00:41:14.122612 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122888 kubelet[2717]: I1124 00:41:14.122639 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-net\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.122888 kubelet[2717]: I1124 00:41:14.122663 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-etc-cni-netd\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.124117 kubelet[2717]: I1124 00:41:14.123901 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-kernel\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.124117 kubelet[2717]: I1124 00:41:14.123997 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.124117 kubelet[2717]: I1124 00:41:14.124023 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9mn\" (UniqueName: \"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.124117 kubelet[2717]: I1124 00:41:14.124047 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jsw\" (UniqueName: \"kubernetes.io/projected/7582fc7b-3e47-4385-8952-721f4ee24d23-kube-api-access-45jsw\") pod \"kube-proxy-sv262\" (UID: \"7582fc7b-3e47-4385-8952-721f4ee24d23\") " pod="kube-system/kube-proxy-sv262" Nov 24 00:41:14.124117 kubelet[2717]: I1124 00:41:14.124063 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-run\") pod \"cilium-tmgsz\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") " pod="kube-system/cilium-tmgsz" Nov 24 00:41:14.233772 kubelet[2717]: E1124 00:41:14.233723 2717 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.233772 kubelet[2717]: E1124 00:41:14.233764 2717 projected.go:194] Error preparing data for projected volume kube-api-access-jt9mn for pod kube-system/cilium-tmgsz: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.234044 kubelet[2717]: E1124 00:41:14.233831 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn podName:86fa2431-929e-4af1-bca3-4df3a5ec27d2 nodeName:}" failed. 
No retries permitted until 2025-11-24 00:41:14.733807122 +0000 UTC m=+5.635138574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jt9mn" (UniqueName: "kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn") pod "cilium-tmgsz" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2") : configmap "kube-root-ca.crt" not found Nov 24 00:41:14.234044 kubelet[2717]: E1124 00:41:14.234000 2717 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.234044 kubelet[2717]: E1124 00:41:14.234014 2717 projected.go:194] Error preparing data for projected volume kube-api-access-45jsw for pod kube-system/kube-proxy-sv262: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.234258 kubelet[2717]: E1124 00:41:14.234051 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7582fc7b-3e47-4385-8952-721f4ee24d23-kube-api-access-45jsw podName:7582fc7b-3e47-4385-8952-721f4ee24d23 nodeName:}" failed. No retries permitted until 2025-11-24 00:41:14.734041849 +0000 UTC m=+5.635373301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-45jsw" (UniqueName: "kubernetes.io/projected/7582fc7b-3e47-4385-8952-721f4ee24d23-kube-api-access-45jsw") pod "kube-proxy-sv262" (UID: "7582fc7b-3e47-4385-8952-721f4ee24d23") : configmap "kube-root-ca.crt" not found Nov 24 00:41:14.425208 systemd[1]: Created slice kubepods-besteffort-pod57089cda_6f65_4772_9a4c_a21d8b0b060c.slice - libcontainer container kubepods-besteffort-pod57089cda_6f65_4772_9a4c_a21d8b0b060c.slice. 
Nov 24 00:41:14.426810 kubelet[2717]: I1124 00:41:14.426670 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57089cda-6f65-4772-9a4c-a21d8b0b060c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bcdhs\" (UID: \"57089cda-6f65-4772-9a4c-a21d8b0b060c\") " pod="kube-system/cilium-operator-6c4d7847fc-bcdhs" Nov 24 00:41:14.426899 kubelet[2717]: I1124 00:41:14.426837 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5vm\" (UniqueName: \"kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm\") pod \"cilium-operator-6c4d7847fc-bcdhs\" (UID: \"57089cda-6f65-4772-9a4c-a21d8b0b060c\") " pod="kube-system/cilium-operator-6c4d7847fc-bcdhs" Nov 24 00:41:14.532409 kubelet[2717]: E1124 00:41:14.532351 2717 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.532409 kubelet[2717]: E1124 00:41:14.532390 2717 projected.go:194] Error preparing data for projected volume kube-api-access-gv5vm for pod kube-system/cilium-operator-6c4d7847fc-bcdhs: configmap "kube-root-ca.crt" not found Nov 24 00:41:14.532755 kubelet[2717]: E1124 00:41:14.532498 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm podName:57089cda-6f65-4772-9a4c-a21d8b0b060c nodeName:}" failed. No retries permitted until 2025-11-24 00:41:15.03246733 +0000 UTC m=+5.933798782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gv5vm" (UniqueName: "kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm") pod "cilium-operator-6c4d7847fc-bcdhs" (UID: "57089cda-6f65-4772-9a4c-a21d8b0b060c") : configmap "kube-root-ca.crt" not found Nov 24 00:41:14.969494 kubelet[2717]: E1124 00:41:14.969446 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:14.970564 containerd[1552]: time="2025-11-24T00:41:14.970412243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv262,Uid:7582fc7b-3e47-4385-8952-721f4ee24d23,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:14.997700 containerd[1552]: time="2025-11-24T00:41:14.997627096Z" level=info msg="connecting to shim 714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd" address="unix:///run/containerd/s/b13e43f0219af3cad2a9ed66e5cac1c15d153af6fd0f0df9f40a08a01c4dfb0f" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:15.028822 systemd[1]: Started cri-containerd-714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd.scope - libcontainer container 714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd. 
Nov 24 00:41:15.063187 containerd[1552]: time="2025-11-24T00:41:15.063137340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sv262,Uid:7582fc7b-3e47-4385-8952-721f4ee24d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd\"" Nov 24 00:41:15.063796 kubelet[2717]: E1124 00:41:15.063772 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.073027 containerd[1552]: time="2025-11-24T00:41:15.072734535Z" level=info msg="CreateContainer within sandbox \"714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:41:15.082896 containerd[1552]: time="2025-11-24T00:41:15.082876247Z" level=info msg="Container 9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:15.087971 containerd[1552]: time="2025-11-24T00:41:15.087946813Z" level=info msg="CreateContainer within sandbox \"714e977b7f2a46541e9df1c99c3dfec89bdb4c629f2b730403d92af36008d7cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad\"" Nov 24 00:41:15.088553 containerd[1552]: time="2025-11-24T00:41:15.088518191Z" level=info msg="StartContainer for \"9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad\"" Nov 24 00:41:15.090060 containerd[1552]: time="2025-11-24T00:41:15.090040088Z" level=info msg="connecting to shim 9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad" address="unix:///run/containerd/s/b13e43f0219af3cad2a9ed66e5cac1c15d153af6fd0f0df9f40a08a01c4dfb0f" protocol=ttrpc version=3 Nov 24 00:41:15.116805 systemd[1]: Started cri-containerd-9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad.scope - 
libcontainer container 9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad. Nov 24 00:41:15.188779 containerd[1552]: time="2025-11-24T00:41:15.188736206Z" level=info msg="StartContainer for \"9ec2e6e9b4d0dae9ad0591b06ee3c9dd137525fe354b3ba14f742f5e4f9a23ad\" returns successfully" Nov 24 00:41:15.226362 kubelet[2717]: E1124 00:41:15.225885 2717 projected.go:264] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Nov 24 00:41:15.226362 kubelet[2717]: E1124 00:41:15.225934 2717 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-tmgsz: failed to sync secret cache: timed out waiting for the condition Nov 24 00:41:15.226362 kubelet[2717]: E1124 00:41:15.226011 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls podName:86fa2431-929e-4af1-bca3-4df3a5ec27d2 nodeName:}" failed. No retries permitted until 2025-11-24 00:41:15.725986083 +0000 UTC m=+6.627317535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls") pod "cilium-tmgsz" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2") : failed to sync secret cache: timed out waiting for the condition Nov 24 00:41:15.226362 kubelet[2717]: E1124 00:41:15.226224 2717 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Nov 24 00:41:15.226362 kubelet[2717]: E1124 00:41:15.226261 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path podName:86fa2431-929e-4af1-bca3-4df3a5ec27d2 nodeName:}" failed. No retries permitted until 2025-11-24 00:41:15.726254192 +0000 UTC m=+6.627585644 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path") pod "cilium-tmgsz" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2") : failed to sync configmap cache: timed out waiting for the condition Nov 24 00:41:15.251200 kubelet[2717]: E1124 00:41:15.251168 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.260215 kubelet[2717]: I1124 00:41:15.259968 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sv262" podStartSLOduration=1.259954949 podStartE2EDuration="1.259954949s" podCreationTimestamp="2025-11-24 00:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:15.259693841 +0000 UTC m=+6.161025293" watchObservedRunningTime="2025-11-24 00:41:15.259954949 +0000 UTC m=+6.161286401" Nov 24 00:41:15.471886 kubelet[2717]: E1124 00:41:15.471301 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.631929 kubelet[2717]: E1124 00:41:15.631888 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.633689 containerd[1552]: time="2025-11-24T00:41:15.633612913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcdhs,Uid:57089cda-6f65-4772-9a4c-a21d8b0b060c,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:15.649670 containerd[1552]: time="2025-11-24T00:41:15.649621726Z" level=info msg="connecting to shim 
0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0" address="unix:///run/containerd/s/0bb5c1c0edd504e4cf83c7e52d8b5a0541cde3931efb3f9d01313952b8510136" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:15.680840 systemd[1]: Started cri-containerd-0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0.scope - libcontainer container 0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0. Nov 24 00:41:15.743361 containerd[1552]: time="2025-11-24T00:41:15.743212778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcdhs,Uid:57089cda-6f65-4772-9a4c-a21d8b0b060c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\"" Nov 24 00:41:15.744062 kubelet[2717]: E1124 00:41:15.743981 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.746652 containerd[1552]: time="2025-11-24T00:41:15.746551551Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 24 00:41:15.888728 kubelet[2717]: E1124 00:41:15.888575 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:15.889415 containerd[1552]: time="2025-11-24T00:41:15.889360558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tmgsz,Uid:86fa2431-929e-4af1-bca3-4df3a5ec27d2,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:15.911553 containerd[1552]: time="2025-11-24T00:41:15.911512420Z" level=info msg="connecting to shim cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" 
namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:15.942808 systemd[1]: Started cri-containerd-cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1.scope - libcontainer container cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1. Nov 24 00:41:15.966580 containerd[1552]: time="2025-11-24T00:41:15.966547264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tmgsz,Uid:86fa2431-929e-4af1-bca3-4df3a5ec27d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\"" Nov 24 00:41:15.967494 kubelet[2717]: E1124 00:41:15.967472 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:16.256030 kubelet[2717]: E1124 00:41:16.255861 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:16.837138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535194329.mount: Deactivated successfully. 
Nov 24 00:41:17.209645 containerd[1552]: time="2025-11-24T00:41:17.209382757Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:41:17.210350 containerd[1552]: time="2025-11-24T00:41:17.210193619Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 24 00:41:17.210782 containerd[1552]: time="2025-11-24T00:41:17.210744974Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:41:17.211795 containerd[1552]: time="2025-11-24T00:41:17.211711600Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.465115158s" Nov 24 00:41:17.211795 containerd[1552]: time="2025-11-24T00:41:17.211739011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 24 00:41:17.214285 containerd[1552]: time="2025-11-24T00:41:17.212808360Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 24 00:41:17.217288 containerd[1552]: time="2025-11-24T00:41:17.217260940Z" level=info msg="CreateContainer within sandbox 
\"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 24 00:41:17.225721 containerd[1552]: time="2025-11-24T00:41:17.225239266Z" level=info msg="Container ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:17.229442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2720004006.mount: Deactivated successfully. Nov 24 00:41:17.233726 containerd[1552]: time="2025-11-24T00:41:17.233665834Z" level=info msg="CreateContainer within sandbox \"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\"" Nov 24 00:41:17.234861 containerd[1552]: time="2025-11-24T00:41:17.234104776Z" level=info msg="StartContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\"" Nov 24 00:41:17.234861 containerd[1552]: time="2025-11-24T00:41:17.234839666Z" level=info msg="connecting to shim ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1" address="unix:///run/containerd/s/0bb5c1c0edd504e4cf83c7e52d8b5a0541cde3931efb3f9d01313952b8510136" protocol=ttrpc version=3 Nov 24 00:41:17.257861 systemd[1]: Started cri-containerd-ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1.scope - libcontainer container ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1. 
Nov 24 00:41:17.258457 kubelet[2717]: E1124 00:41:17.258306 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:17.298198 containerd[1552]: time="2025-11-24T00:41:17.298155380Z" level=info msg="StartContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" returns successfully" Nov 24 00:41:17.693710 kubelet[2717]: E1124 00:41:17.693313 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:18.272709 kubelet[2717]: E1124 00:41:18.272200 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:18.272709 kubelet[2717]: E1124 00:41:18.272475 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:19.276269 kubelet[2717]: E1124 00:41:19.276182 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:20.901552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194977490.mount: Deactivated successfully. 
Nov 24 00:41:22.430489 containerd[1552]: time="2025-11-24T00:41:22.430212151Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:41:22.431634 containerd[1552]: time="2025-11-24T00:41:22.431589148Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 24 00:41:22.432655 containerd[1552]: time="2025-11-24T00:41:22.432115988Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:41:22.433982 containerd[1552]: time="2025-11-24T00:41:22.433940534Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.221105774s" Nov 24 00:41:22.434030 containerd[1552]: time="2025-11-24T00:41:22.433982205Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 24 00:41:22.438438 containerd[1552]: time="2025-11-24T00:41:22.438400521Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 24 00:41:22.446016 containerd[1552]: time="2025-11-24T00:41:22.445392948Z" level=info msg="Container 0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8: CDI 
devices from CRI Config.CDIDevices: []" Nov 24 00:41:22.463270 containerd[1552]: time="2025-11-24T00:41:22.463235188Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\"" Nov 24 00:41:22.463712 containerd[1552]: time="2025-11-24T00:41:22.463658027Z" level=info msg="StartContainer for \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\"" Nov 24 00:41:22.465940 containerd[1552]: time="2025-11-24T00:41:22.465894790Z" level=info msg="connecting to shim 0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" protocol=ttrpc version=3 Nov 24 00:41:22.495838 systemd[1]: Started cri-containerd-0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8.scope - libcontainer container 0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8. Nov 24 00:41:22.535191 containerd[1552]: time="2025-11-24T00:41:22.535110337Z" level=info msg="StartContainer for \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" returns successfully" Nov 24 00:41:22.551569 systemd[1]: cri-containerd-0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8.scope: Deactivated successfully. Nov 24 00:41:22.559000 containerd[1552]: time="2025-11-24T00:41:22.558945864Z" level=info msg="received container exit event container_id:\"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" id:\"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" pid:3192 exited_at:{seconds:1763944882 nanos:557955984}" Nov 24 00:41:22.583403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8-rootfs.mount: Deactivated successfully. 
Nov 24 00:41:23.055717 kubelet[2717]: E1124 00:41:23.053818 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:23.065970 kubelet[2717]: I1124 00:41:23.065840 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bcdhs" podStartSLOduration=7.5989448280000005 podStartE2EDuration="9.065818227s" podCreationTimestamp="2025-11-24 00:41:14 +0000 UTC" firstStartedPulling="2025-11-24 00:41:15.745744446 +0000 UTC m=+6.647075898" lastFinishedPulling="2025-11-24 00:41:17.212617835 +0000 UTC m=+8.113949297" observedRunningTime="2025-11-24 00:41:18.315211259 +0000 UTC m=+9.216542711" watchObservedRunningTime="2025-11-24 00:41:23.065818227 +0000 UTC m=+13.967149679" Nov 24 00:41:23.286543 kubelet[2717]: E1124 00:41:23.286480 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:23.293256 containerd[1552]: time="2025-11-24T00:41:23.293191895Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 24 00:41:23.305313 containerd[1552]: time="2025-11-24T00:41:23.305236206Z" level=info msg="Container 7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:23.311723 containerd[1552]: time="2025-11-24T00:41:23.311599253Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\"" Nov 24 00:41:23.313768 
containerd[1552]: time="2025-11-24T00:41:23.312579751Z" level=info msg="StartContainer for \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\"" Nov 24 00:41:23.313768 containerd[1552]: time="2025-11-24T00:41:23.313748322Z" level=info msg="connecting to shim 7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" protocol=ttrpc version=3 Nov 24 00:41:23.353852 systemd[1]: Started cri-containerd-7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82.scope - libcontainer container 7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82. Nov 24 00:41:23.391565 containerd[1552]: time="2025-11-24T00:41:23.391507061Z" level=info msg="StartContainer for \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" returns successfully" Nov 24 00:41:23.416398 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:41:23.417059 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:41:23.417492 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:41:23.419759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:41:23.424020 systemd[1]: cri-containerd-7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82.scope: Deactivated successfully. Nov 24 00:41:23.426004 containerd[1552]: time="2025-11-24T00:41:23.425952874Z" level=info msg="received container exit event container_id:\"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" id:\"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" pid:3234 exited_at:{seconds:1763944883 nanos:425274391}" Nov 24 00:41:23.458839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 24 00:41:23.476078 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82-rootfs.mount: Deactivated successfully. Nov 24 00:41:24.288973 kubelet[2717]: E1124 00:41:24.288947 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:24.294101 containerd[1552]: time="2025-11-24T00:41:24.294022626Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 24 00:41:24.308389 containerd[1552]: time="2025-11-24T00:41:24.307352875Z" level=info msg="Container 244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:24.314248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532507304.mount: Deactivated successfully. 
Nov 24 00:41:24.320790 containerd[1552]: time="2025-11-24T00:41:24.320307398Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\"" Nov 24 00:41:24.322479 containerd[1552]: time="2025-11-24T00:41:24.321428468Z" level=info msg="StartContainer for \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\"" Nov 24 00:41:24.323282 containerd[1552]: time="2025-11-24T00:41:24.323238299Z" level=info msg="connecting to shim 244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" protocol=ttrpc version=3 Nov 24 00:41:24.350029 systemd[1]: Started cri-containerd-244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a.scope - libcontainer container 244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a. Nov 24 00:41:24.434466 containerd[1552]: time="2025-11-24T00:41:24.434079008Z" level=info msg="StartContainer for \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" returns successfully" Nov 24 00:41:24.440040 systemd[1]: cri-containerd-244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a.scope: Deactivated successfully. Nov 24 00:41:24.442142 containerd[1552]: time="2025-11-24T00:41:24.441999934Z" level=info msg="received container exit event container_id:\"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" id:\"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" pid:3280 exited_at:{seconds:1763944884 nanos:441848612}" Nov 24 00:41:24.469798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a-rootfs.mount: Deactivated successfully. 
Nov 24 00:41:25.292899 kubelet[2717]: E1124 00:41:25.292856 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:25.299649 containerd[1552]: time="2025-11-24T00:41:25.299575484Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 24 00:41:25.313571 containerd[1552]: time="2025-11-24T00:41:25.312631235Z" level=info msg="Container b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:25.320596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount176866113.mount: Deactivated successfully. Nov 24 00:41:25.327795 containerd[1552]: time="2025-11-24T00:41:25.327747999Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\"" Nov 24 00:41:25.328732 containerd[1552]: time="2025-11-24T00:41:25.328632253Z" level=info msg="StartContainer for \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\"" Nov 24 00:41:25.330012 containerd[1552]: time="2025-11-24T00:41:25.329968165Z" level=info msg="connecting to shim b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" protocol=ttrpc version=3 Nov 24 00:41:25.360816 systemd[1]: Started cri-containerd-b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39.scope - libcontainer container b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39. 
Nov 24 00:41:25.400085 systemd[1]: cri-containerd-b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39.scope: Deactivated successfully. Nov 24 00:41:25.404375 containerd[1552]: time="2025-11-24T00:41:25.404327126Z" level=info msg="received container exit event container_id:\"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" id:\"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" pid:3321 exited_at:{seconds:1763944885 nanos:404135943}" Nov 24 00:41:25.407923 containerd[1552]: time="2025-11-24T00:41:25.407882333Z" level=info msg="StartContainer for \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" returns successfully" Nov 24 00:41:25.439825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39-rootfs.mount: Deactivated successfully. Nov 24 00:41:25.498884 update_engine[1533]: I20251124 00:41:25.498800 1533 update_attempter.cc:509] Updating boot flags... 
Nov 24 00:41:26.299793 kubelet[2717]: E1124 00:41:26.298665 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:26.305091 containerd[1552]: time="2025-11-24T00:41:26.304459673Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 24 00:41:26.319917 containerd[1552]: time="2025-11-24T00:41:26.319232297Z" level=info msg="Container 4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:26.333182 containerd[1552]: time="2025-11-24T00:41:26.333147107Z" level=info msg="CreateContainer within sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\"" Nov 24 00:41:26.334798 containerd[1552]: time="2025-11-24T00:41:26.334744832Z" level=info msg="StartContainer for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\"" Nov 24 00:41:26.336075 containerd[1552]: time="2025-11-24T00:41:26.335989570Z" level=info msg="connecting to shim 4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b" address="unix:///run/containerd/s/731d3876e2b5aacccde02a99a1c5320b52adc8533719b2df477f1b9c9854f595" protocol=ttrpc version=3 Nov 24 00:41:26.359816 systemd[1]: Started cri-containerd-4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b.scope - libcontainer container 4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b. 
Nov 24 00:41:26.419230 containerd[1552]: time="2025-11-24T00:41:26.419188290Z" level=info msg="StartContainer for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" returns successfully" Nov 24 00:41:26.539453 kubelet[2717]: I1124 00:41:26.539416 2717 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:41:26.594188 systemd[1]: Created slice kubepods-burstable-poda64f0c37_42b7_4214_9b88_8d83c8062880.slice - libcontainer container kubepods-burstable-poda64f0c37_42b7_4214_9b88_8d83c8062880.slice. Nov 24 00:41:26.608246 systemd[1]: Created slice kubepods-burstable-podc4ab3d9a_454a_4c9a_92be_af9362a53614.slice - libcontainer container kubepods-burstable-podc4ab3d9a_454a_4c9a_92be_af9362a53614.slice. Nov 24 00:41:26.617711 kubelet[2717]: I1124 00:41:26.617664 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg8l7\" (UniqueName: \"kubernetes.io/projected/a64f0c37-42b7-4214-9b88-8d83c8062880-kube-api-access-mg8l7\") pod \"coredns-674b8bbfcf-gwth2\" (UID: \"a64f0c37-42b7-4214-9b88-8d83c8062880\") " pod="kube-system/coredns-674b8bbfcf-gwth2" Nov 24 00:41:26.617876 kubelet[2717]: I1124 00:41:26.617858 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpnm\" (UniqueName: \"kubernetes.io/projected/c4ab3d9a-454a-4c9a-92be-af9362a53614-kube-api-access-dvpnm\") pod \"coredns-674b8bbfcf-mrjfp\" (UID: \"c4ab3d9a-454a-4c9a-92be-af9362a53614\") " pod="kube-system/coredns-674b8bbfcf-mrjfp" Nov 24 00:41:26.618028 kubelet[2717]: I1124 00:41:26.617974 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4ab3d9a-454a-4c9a-92be-af9362a53614-config-volume\") pod \"coredns-674b8bbfcf-mrjfp\" (UID: \"c4ab3d9a-454a-4c9a-92be-af9362a53614\") " pod="kube-system/coredns-674b8bbfcf-mrjfp" Nov 24 
00:41:26.618103 kubelet[2717]: I1124 00:41:26.618091 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a64f0c37-42b7-4214-9b88-8d83c8062880-config-volume\") pod \"coredns-674b8bbfcf-gwth2\" (UID: \"a64f0c37-42b7-4214-9b88-8d83c8062880\") " pod="kube-system/coredns-674b8bbfcf-gwth2" Nov 24 00:41:26.904871 kubelet[2717]: E1124 00:41:26.903381 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:26.904970 containerd[1552]: time="2025-11-24T00:41:26.904122381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwth2,Uid:a64f0c37-42b7-4214-9b88-8d83c8062880,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:26.912058 kubelet[2717]: E1124 00:41:26.912040 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:26.914252 containerd[1552]: time="2025-11-24T00:41:26.914148523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mrjfp,Uid:c4ab3d9a-454a-4c9a-92be-af9362a53614,Namespace:kube-system,Attempt:0,}" Nov 24 00:41:27.304558 kubelet[2717]: E1124 00:41:27.304433 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:27.343186 kubelet[2717]: I1124 00:41:27.343104 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tmgsz" podStartSLOduration=6.877422546 podStartE2EDuration="13.343066162s" podCreationTimestamp="2025-11-24 00:41:14 +0000 UTC" firstStartedPulling="2025-11-24 00:41:15.969181715 +0000 UTC m=+6.870513167" 
lastFinishedPulling="2025-11-24 00:41:22.434825331 +0000 UTC m=+13.336156783" observedRunningTime="2025-11-24 00:41:27.338763371 +0000 UTC m=+18.240094823" watchObservedRunningTime="2025-11-24 00:41:27.343066162 +0000 UTC m=+18.244397614" Nov 24 00:41:28.307494 kubelet[2717]: E1124 00:41:28.307438 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:28.663373 systemd-networkd[1426]: cilium_host: Link UP Nov 24 00:41:28.664339 systemd-networkd[1426]: cilium_net: Link UP Nov 24 00:41:28.664918 systemd-networkd[1426]: cilium_net: Gained carrier Nov 24 00:41:28.665137 systemd-networkd[1426]: cilium_host: Gained carrier Nov 24 00:41:28.797455 systemd-networkd[1426]: cilium_vxlan: Link UP Nov 24 00:41:28.797466 systemd-networkd[1426]: cilium_vxlan: Gained carrier Nov 24 00:41:28.879912 systemd-networkd[1426]: cilium_net: Gained IPv6LL Nov 24 00:41:29.027705 kernel: NET: Registered PF_ALG protocol family Nov 24 00:41:29.310704 kubelet[2717]: E1124 00:41:29.310575 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:29.641025 systemd-networkd[1426]: cilium_host: Gained IPv6LL Nov 24 00:41:29.794538 systemd-networkd[1426]: lxc_health: Link UP Nov 24 00:41:29.809503 systemd-networkd[1426]: lxc_health: Gained carrier Nov 24 00:41:29.949292 kernel: eth0: renamed from tmp4b10e Nov 24 00:41:29.948424 systemd-networkd[1426]: lxc07c93eaa1e20: Link UP Nov 24 00:41:29.951710 systemd-networkd[1426]: lxc07c93eaa1e20: Gained carrier Nov 24 00:41:29.983140 kernel: eth0: renamed from tmpe1ddd Nov 24 00:41:29.988978 systemd-networkd[1426]: lxc22c0f61c14b7: Link UP Nov 24 00:41:29.990898 systemd-networkd[1426]: lxc22c0f61c14b7: Gained carrier Nov 24 00:41:30.312443 kubelet[2717]: E1124 
00:41:30.312382 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:30.536874 systemd-networkd[1426]: cilium_vxlan: Gained IPv6LL Nov 24 00:41:31.176795 systemd-networkd[1426]: lxc_health: Gained IPv6LL Nov 24 00:41:31.179797 systemd-networkd[1426]: lxc22c0f61c14b7: Gained IPv6LL Nov 24 00:41:31.314566 kubelet[2717]: E1124 00:41:31.314504 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:31.432856 systemd-networkd[1426]: lxc07c93eaa1e20: Gained IPv6LL Nov 24 00:41:32.317731 kubelet[2717]: E1124 00:41:32.316865 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:33.267338 containerd[1552]: time="2025-11-24T00:41:33.266403010Z" level=info msg="connecting to shim e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc" address="unix:///run/containerd/s/03c86831856c898076b4a78be41514d225fd2b8570f86475cec7b0ba9a99005a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:33.285085 containerd[1552]: time="2025-11-24T00:41:33.285021050Z" level=info msg="connecting to shim 4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b" address="unix:///run/containerd/s/fc5e18947a15f8309c9150a5c0ab10f02edd1f657d3ae849a8168824eda037fc" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:41:33.327043 systemd[1]: Started cri-containerd-4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b.scope - libcontainer container 4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b. 
Nov 24 00:41:33.341170 systemd[1]: Started cri-containerd-e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc.scope - libcontainer container e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc. Nov 24 00:41:33.432123 containerd[1552]: time="2025-11-24T00:41:33.432043096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gwth2,Uid:a64f0c37-42b7-4214-9b88-8d83c8062880,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b\"" Nov 24 00:41:33.434244 kubelet[2717]: E1124 00:41:33.432956 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:33.440077 containerd[1552]: time="2025-11-24T00:41:33.440035233Z" level=info msg="CreateContainer within sandbox \"4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:41:33.443844 containerd[1552]: time="2025-11-24T00:41:33.442717059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mrjfp,Uid:c4ab3d9a-454a-4c9a-92be-af9362a53614,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc\"" Nov 24 00:41:33.444723 kubelet[2717]: E1124 00:41:33.444705 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:33.452127 containerd[1552]: time="2025-11-24T00:41:33.452087169Z" level=info msg="CreateContainer within sandbox \"e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:41:33.455433 containerd[1552]: time="2025-11-24T00:41:33.455401231Z" level=info msg="Container 
2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:33.462852 containerd[1552]: time="2025-11-24T00:41:33.462806703Z" level=info msg="CreateContainer within sandbox \"4b10ea7bb5fca201ebacf7d19b372652977006621bb2316163a656d72a219b4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542\"" Nov 24 00:41:33.464008 containerd[1552]: time="2025-11-24T00:41:33.463810222Z" level=info msg="StartContainer for \"2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542\"" Nov 24 00:41:33.464496 containerd[1552]: time="2025-11-24T00:41:33.464453088Z" level=info msg="connecting to shim 2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542" address="unix:///run/containerd/s/fc5e18947a15f8309c9150a5c0ab10f02edd1f657d3ae849a8168824eda037fc" protocol=ttrpc version=3 Nov 24 00:41:33.464818 containerd[1552]: time="2025-11-24T00:41:33.464781932Z" level=info msg="Container a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:41:33.470590 containerd[1552]: time="2025-11-24T00:41:33.470088403Z" level=info msg="CreateContainer within sandbox \"e1ddda43a920c7a630234a65984a79f8e61faeb735c1d375d07a71a116662ebc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f\"" Nov 24 00:41:33.472252 containerd[1552]: time="2025-11-24T00:41:33.472223373Z" level=info msg="StartContainer for \"a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f\"" Nov 24 00:41:33.475279 containerd[1552]: time="2025-11-24T00:41:33.475244052Z" level=info msg="connecting to shim a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f" address="unix:///run/containerd/s/03c86831856c898076b4a78be41514d225fd2b8570f86475cec7b0ba9a99005a" protocol=ttrpc version=3 Nov 24 00:41:33.502636 
systemd[1]: Started cri-containerd-2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542.scope - libcontainer container 2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542. Nov 24 00:41:33.516023 systemd[1]: Started cri-containerd-a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f.scope - libcontainer container a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f. Nov 24 00:41:33.568283 containerd[1552]: time="2025-11-24T00:41:33.568214608Z" level=info msg="StartContainer for \"2ec2d2955ebfc322f6fd1d55d6bac7406b0cb810b7ba8ab63eac7d25642ce542\" returns successfully" Nov 24 00:41:33.583969 containerd[1552]: time="2025-11-24T00:41:33.583933600Z" level=info msg="StartContainer for \"a819f4d6ae2eab240acff207a1cb66aa70842a5ff14f9f82f0dd2b0e832a9f9f\" returns successfully" Nov 24 00:41:34.250599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046261088.mount: Deactivated successfully. Nov 24 00:41:34.330746 kubelet[2717]: E1124 00:41:34.330592 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:34.339141 kubelet[2717]: E1124 00:41:34.339099 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:34.363644 kubelet[2717]: I1124 00:41:34.363573 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gwth2" podStartSLOduration=20.363552974 podStartE2EDuration="20.363552974s" podCreationTimestamp="2025-11-24 00:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:34.347070525 +0000 UTC m=+25.248401977" watchObservedRunningTime="2025-11-24 00:41:34.363552974 +0000 
UTC m=+25.264884426" Nov 24 00:41:34.384361 kubelet[2717]: I1124 00:41:34.384299 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mrjfp" podStartSLOduration=20.384266821 podStartE2EDuration="20.384266821s" podCreationTimestamp="2025-11-24 00:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:41:34.382952439 +0000 UTC m=+25.284283891" watchObservedRunningTime="2025-11-24 00:41:34.384266821 +0000 UTC m=+25.285598273" Nov 24 00:41:35.342119 kubelet[2717]: E1124 00:41:35.342072 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:35.343025 kubelet[2717]: E1124 00:41:35.343008 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:36.343929 kubelet[2717]: E1124 00:41:36.343842 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:41:36.343929 kubelet[2717]: E1124 00:41:36.343862 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:33.201988 kubelet[2717]: E1124 00:42:33.201236 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:34.201050 kubelet[2717]: E1124 00:42:34.201013 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:37.203121 kubelet[2717]: E1124 00:42:37.202748 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:37.203121 kubelet[2717]: E1124 00:42:37.202821 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:37.203121 kubelet[2717]: E1124 00:42:37.203043 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:39.201132 kubelet[2717]: E1124 00:42:39.201003 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:42:46.201491 kubelet[2717]: E1124 00:42:46.201444 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:43:01.201057 kubelet[2717]: E1124 00:43:01.200930 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:43:06.412348 systemd[1]: Started sshd@7-172.238.170.212:22-147.75.109.163:54266.service - OpenSSH per-connection server daemon (147.75.109.163:54266). 
Nov 24 00:43:06.744083 sshd[4046]: Accepted publickey for core from 147.75.109.163 port 54266 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:06.745779 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:06.751584 systemd-logind[1528]: New session 8 of user core. Nov 24 00:43:06.770844 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:43:07.089643 sshd[4049]: Connection closed by 147.75.109.163 port 54266 Nov 24 00:43:07.090751 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:07.097934 systemd[1]: sshd@7-172.238.170.212:22-147.75.109.163:54266.service: Deactivated successfully. Nov 24 00:43:07.101093 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:43:07.102669 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Nov 24 00:43:07.105168 systemd-logind[1528]: Removed session 8. Nov 24 00:43:12.148719 systemd[1]: Started sshd@8-172.238.170.212:22-147.75.109.163:38152.service - OpenSSH per-connection server daemon (147.75.109.163:38152). Nov 24 00:43:12.466963 sshd[4065]: Accepted publickey for core from 147.75.109.163 port 38152 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:12.468510 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:12.473408 systemd-logind[1528]: New session 9 of user core. Nov 24 00:43:12.479790 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:43:12.761381 sshd[4068]: Connection closed by 147.75.109.163 port 38152 Nov 24 00:43:12.762270 sshd-session[4065]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:12.767302 systemd[1]: sshd@8-172.238.170.212:22-147.75.109.163:38152.service: Deactivated successfully. Nov 24 00:43:12.769950 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 24 00:43:12.771125 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:43:12.773580 systemd-logind[1528]: Removed session 9. Nov 24 00:43:17.823129 systemd[1]: Started sshd@9-172.238.170.212:22-147.75.109.163:38156.service - OpenSSH per-connection server daemon (147.75.109.163:38156). Nov 24 00:43:18.154083 sshd[4083]: Accepted publickey for core from 147.75.109.163 port 38156 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:18.155508 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:18.160411 systemd-logind[1528]: New session 10 of user core. Nov 24 00:43:18.164790 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:43:18.454773 sshd[4086]: Connection closed by 147.75.109.163 port 38156 Nov 24 00:43:18.455534 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:18.460233 systemd[1]: sshd@9-172.238.170.212:22-147.75.109.163:38156.service: Deactivated successfully. Nov 24 00:43:18.462730 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:43:18.464252 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:43:18.466306 systemd-logind[1528]: Removed session 10. Nov 24 00:43:23.518038 systemd[1]: Started sshd@10-172.238.170.212:22-147.75.109.163:43388.service - OpenSSH per-connection server daemon (147.75.109.163:43388). Nov 24 00:43:23.854778 sshd[4099]: Accepted publickey for core from 147.75.109.163 port 43388 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:23.856237 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:23.861417 systemd-logind[1528]: New session 11 of user core. Nov 24 00:43:23.865821 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 24 00:43:24.167489 sshd[4102]: Connection closed by 147.75.109.163 port 43388
Nov 24 00:43:24.168308 sshd-session[4099]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:24.172787 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit.
Nov 24 00:43:24.173743 systemd[1]: sshd@10-172.238.170.212:22-147.75.109.163:43388.service: Deactivated successfully.
Nov 24 00:43:24.176319 systemd[1]: session-11.scope: Deactivated successfully.
Nov 24 00:43:24.178231 systemd-logind[1528]: Removed session 11.
Nov 24 00:43:24.223520 systemd[1]: Started sshd@11-172.238.170.212:22-147.75.109.163:43392.service - OpenSSH per-connection server daemon (147.75.109.163:43392).
Nov 24 00:43:24.551032 sshd[4115]: Accepted publickey for core from 147.75.109.163 port 43392 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:24.553114 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:24.563524 systemd-logind[1528]: New session 12 of user core.
Nov 24 00:43:24.568823 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 24 00:43:24.898627 sshd[4118]: Connection closed by 147.75.109.163 port 43392
Nov 24 00:43:24.900843 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:24.907234 systemd[1]: sshd@11-172.238.170.212:22-147.75.109.163:43392.service: Deactivated successfully.
Nov 24 00:43:24.908029 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit.
Nov 24 00:43:24.914211 systemd[1]: session-12.scope: Deactivated successfully.
Nov 24 00:43:24.918626 systemd-logind[1528]: Removed session 12.
Nov 24 00:43:24.976658 systemd[1]: Started sshd@12-172.238.170.212:22-147.75.109.163:43400.service - OpenSSH per-connection server daemon (147.75.109.163:43400).
Nov 24 00:43:25.320400 sshd[4128]: Accepted publickey for core from 147.75.109.163 port 43400 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:25.323152 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:25.330721 systemd-logind[1528]: New session 13 of user core.
Nov 24 00:43:25.336841 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 24 00:43:25.636280 sshd[4131]: Connection closed by 147.75.109.163 port 43400
Nov 24 00:43:25.637290 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:25.642445 systemd[1]: sshd@12-172.238.170.212:22-147.75.109.163:43400.service: Deactivated successfully.
Nov 24 00:43:25.645654 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 00:43:25.646783 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit.
Nov 24 00:43:25.649008 systemd-logind[1528]: Removed session 13.
Nov 24 00:43:30.701908 systemd[1]: Started sshd@13-172.238.170.212:22-147.75.109.163:54492.service - OpenSSH per-connection server daemon (147.75.109.163:54492).
Nov 24 00:43:31.052350 sshd[4143]: Accepted publickey for core from 147.75.109.163 port 54492 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:31.054162 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:31.063206 systemd-logind[1528]: New session 14 of user core.
Nov 24 00:43:31.066808 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 24 00:43:31.353946 sshd[4146]: Connection closed by 147.75.109.163 port 54492
Nov 24 00:43:31.354914 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:31.359706 systemd[1]: sshd@13-172.238.170.212:22-147.75.109.163:54492.service: Deactivated successfully.
Nov 24 00:43:31.362571 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 00:43:31.364019 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit.
Nov 24 00:43:31.365809 systemd-logind[1528]: Removed session 14.
Nov 24 00:43:31.418710 systemd[1]: Started sshd@14-172.238.170.212:22-147.75.109.163:54508.service - OpenSSH per-connection server daemon (147.75.109.163:54508).
Nov 24 00:43:31.757132 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 54508 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:31.759255 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:31.765736 systemd-logind[1528]: New session 15 of user core.
Nov 24 00:43:31.772910 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 24 00:43:32.090258 sshd[4162]: Connection closed by 147.75.109.163 port 54508
Nov 24 00:43:32.090897 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:32.097207 systemd[1]: sshd@14-172.238.170.212:22-147.75.109.163:54508.service: Deactivated successfully.
Nov 24 00:43:32.100517 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 00:43:32.101760 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit.
Nov 24 00:43:32.103747 systemd-logind[1528]: Removed session 15.
Nov 24 00:43:32.152883 systemd[1]: Started sshd@15-172.238.170.212:22-147.75.109.163:54524.service - OpenSSH per-connection server daemon (147.75.109.163:54524).
Nov 24 00:43:32.486283 sshd[4172]: Accepted publickey for core from 147.75.109.163 port 54524 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:32.488105 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:32.494043 systemd-logind[1528]: New session 16 of user core.
Nov 24 00:43:32.502007 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 24 00:43:33.368993 sshd[4175]: Connection closed by 147.75.109.163 port 54524
Nov 24 00:43:33.369405 sshd-session[4172]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:33.375668 systemd[1]: sshd@15-172.238.170.212:22-147.75.109.163:54524.service: Deactivated successfully.
Nov 24 00:43:33.376097 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit.
Nov 24 00:43:33.379406 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 00:43:33.381702 systemd-logind[1528]: Removed session 16.
Nov 24 00:43:33.437392 systemd[1]: Started sshd@16-172.238.170.212:22-147.75.109.163:54528.service - OpenSSH per-connection server daemon (147.75.109.163:54528).
Nov 24 00:43:33.777985 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 54528 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:33.779858 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:33.786739 systemd-logind[1528]: New session 17 of user core.
Nov 24 00:43:33.792926 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 24 00:43:34.202265 sshd[4196]: Connection closed by 147.75.109.163 port 54528
Nov 24 00:43:34.203301 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:34.207724 systemd[1]: sshd@16-172.238.170.212:22-147.75.109.163:54528.service: Deactivated successfully.
Nov 24 00:43:34.210293 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 00:43:34.212445 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit.
Nov 24 00:43:34.214739 systemd-logind[1528]: Removed session 17.
Nov 24 00:43:34.263158 systemd[1]: Started sshd@17-172.238.170.212:22-147.75.109.163:54544.service - OpenSSH per-connection server daemon (147.75.109.163:54544).
Nov 24 00:43:34.598021 sshd[4206]: Accepted publickey for core from 147.75.109.163 port 54544 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:34.600213 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:34.606985 systemd-logind[1528]: New session 18 of user core.
Nov 24 00:43:34.615828 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 24 00:43:34.897271 sshd[4210]: Connection closed by 147.75.109.163 port 54544
Nov 24 00:43:34.898636 sshd-session[4206]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:34.905214 systemd[1]: sshd@17-172.238.170.212:22-147.75.109.163:54544.service: Deactivated successfully.
Nov 24 00:43:34.908551 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 00:43:34.910107 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit.
Nov 24 00:43:34.911840 systemd-logind[1528]: Removed session 18.
Nov 24 00:43:35.201815 kubelet[2717]: E1124 00:43:35.201115 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:39.968977 systemd[1]: Started sshd@18-172.238.170.212:22-147.75.109.163:54554.service - OpenSSH per-connection server daemon (147.75.109.163:54554).
Nov 24 00:43:40.308562 sshd[4225]: Accepted publickey for core from 147.75.109.163 port 54554 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:40.310107 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:40.315863 systemd-logind[1528]: New session 19 of user core.
Nov 24 00:43:40.322843 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 24 00:43:40.611070 sshd[4228]: Connection closed by 147.75.109.163 port 54554
Nov 24 00:43:40.612863 sshd-session[4225]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:40.617974 systemd[1]: sshd@18-172.238.170.212:22-147.75.109.163:54554.service: Deactivated successfully.
Nov 24 00:43:40.620753 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 00:43:40.621811 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit.
Nov 24 00:43:40.624423 systemd-logind[1528]: Removed session 19.
Nov 24 00:43:41.200780 kubelet[2717]: E1124 00:43:41.200723 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:45.680065 systemd[1]: Started sshd@19-172.238.170.212:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452).
Nov 24 00:43:46.023792 sshd[4242]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:46.025798 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:46.032849 systemd-logind[1528]: New session 20 of user core.
Nov 24 00:43:46.040831 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 24 00:43:46.329882 sshd[4245]: Connection closed by 147.75.109.163 port 43452
Nov 24 00:43:46.330755 sshd-session[4242]: pam_unix(sshd:session): session closed for user core
Nov 24 00:43:46.336990 systemd[1]: sshd@19-172.238.170.212:22-147.75.109.163:43452.service: Deactivated successfully.
Nov 24 00:43:46.340669 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 00:43:46.341834 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit.
Nov 24 00:43:46.343527 systemd-logind[1528]: Removed session 20.
Nov 24 00:43:46.391604 systemd[1]: Started sshd@20-172.238.170.212:22-147.75.109.163:43464.service - OpenSSH per-connection server daemon (147.75.109.163:43464).
Nov 24 00:43:46.731021 sshd[4256]: Accepted publickey for core from 147.75.109.163 port 43464 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:46.732237 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:46.742782 systemd-logind[1528]: New session 21 of user core.
Nov 24 00:43:46.748837 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 24 00:43:48.236855 containerd[1552]: time="2025-11-24T00:43:48.236588661Z" level=info msg="StopContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" with timeout 30 (s)"
Nov 24 00:43:48.238483 containerd[1552]: time="2025-11-24T00:43:48.238454917Z" level=info msg="Stop container \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" with signal terminated"
Nov 24 00:43:48.257627 systemd[1]: cri-containerd-ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1.scope: Deactivated successfully.
Nov 24 00:43:48.264992 containerd[1552]: time="2025-11-24T00:43:48.264918347Z" level=info msg="received container exit event container_id:\"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" id:\"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" pid:3128 exited_at:{seconds:1763945028 nanos:263272429}"
Nov 24 00:43:48.276428 containerd[1552]: time="2025-11-24T00:43:48.276376860Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 24 00:43:48.290135 containerd[1552]: time="2025-11-24T00:43:48.290007756Z" level=info msg="StopContainer for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" with timeout 2 (s)"
Nov 24 00:43:48.290427 containerd[1552]: time="2025-11-24T00:43:48.290380684Z" level=info msg="Stop container \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" with signal terminated"
Nov 24 00:43:48.305906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1-rootfs.mount: Deactivated successfully.
Nov 24 00:43:48.310185 systemd-networkd[1426]: lxc_health: Link DOWN
Nov 24 00:43:48.310193 systemd-networkd[1426]: lxc_health: Lost carrier
Nov 24 00:43:48.330991 systemd[1]: cri-containerd-4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b.scope: Deactivated successfully.
Nov 24 00:43:48.332891 systemd[1]: cri-containerd-4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b.scope: Consumed 6.462s CPU time, 125.1M memory peak, 136K read from disk, 13.3M written to disk.
Nov 24 00:43:48.334199 containerd[1552]: time="2025-11-24T00:43:48.334162852Z" level=info msg="StopContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" returns successfully"
Nov 24 00:43:48.337485 containerd[1552]: time="2025-11-24T00:43:48.337406327Z" level=info msg="StopPodSandbox for \"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\""
Nov 24 00:43:48.337691 containerd[1552]: time="2025-11-24T00:43:48.337638785Z" level=info msg="Container to stop \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.343234 containerd[1552]: time="2025-11-24T00:43:48.343174153Z" level=info msg="received container exit event container_id:\"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" id:\"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" pid:3384 exited_at:{seconds:1763945028 nanos:341737994}"
Nov 24 00:43:48.349844 systemd[1]: cri-containerd-0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0.scope: Deactivated successfully.
Nov 24 00:43:48.357615 containerd[1552]: time="2025-11-24T00:43:48.357266646Z" level=info msg="received sandbox exit event container_id:\"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" id:\"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" exit_status:137 exited_at:{seconds:1763945028 nanos:357113308}" monitor_name=podsandbox
Nov 24 00:43:48.394589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b-rootfs.mount: Deactivated successfully.
Nov 24 00:43:48.402607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0-rootfs.mount: Deactivated successfully.
Nov 24 00:43:48.405652 containerd[1552]: time="2025-11-24T00:43:48.405611640Z" level=info msg="shim disconnected" id=0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0 namespace=k8s.io
Nov 24 00:43:48.406021 containerd[1552]: time="2025-11-24T00:43:48.405802279Z" level=warning msg="cleaning up after shim disconnected" id=0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0 namespace=k8s.io
Nov 24 00:43:48.406021 containerd[1552]: time="2025-11-24T00:43:48.405815469Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 24 00:43:48.411433 containerd[1552]: time="2025-11-24T00:43:48.411411186Z" level=info msg="StopContainer for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" returns successfully"
Nov 24 00:43:48.412563 containerd[1552]: time="2025-11-24T00:43:48.412458398Z" level=info msg="StopPodSandbox for \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\""
Nov 24 00:43:48.412959 containerd[1552]: time="2025-11-24T00:43:48.412900065Z" level=info msg="Container to stop \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.412959 containerd[1552]: time="2025-11-24T00:43:48.412920545Z" level=info msg="Container to stop \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.412959 containerd[1552]: time="2025-11-24T00:43:48.412930485Z" level=info msg="Container to stop \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.413217 containerd[1552]: time="2025-11-24T00:43:48.413142593Z" level=info msg="Container to stop \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.413217 containerd[1552]: time="2025-11-24T00:43:48.413167453Z" level=info msg="Container to stop \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 24 00:43:48.424460 systemd[1]: cri-containerd-cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1.scope: Deactivated successfully.
Nov 24 00:43:48.424874 containerd[1552]: time="2025-11-24T00:43:48.424722035Z" level=info msg="received sandbox container exit event sandbox_id:\"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" exit_status:137 exited_at:{seconds:1763945028 nanos:357113308}" monitor_name=criService
Nov 24 00:43:48.429327 containerd[1552]: time="2025-11-24T00:43:48.428999883Z" level=info msg="TearDown network for sandbox \"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" successfully"
Nov 24 00:43:48.429327 containerd[1552]: time="2025-11-24T00:43:48.429029983Z" level=info msg="StopPodSandbox for \"0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0\" returns successfully"
Nov 24 00:43:48.430567 containerd[1552]: time="2025-11-24T00:43:48.430497532Z" level=info msg="received sandbox exit event container_id:\"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" id:\"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" exit_status:137 exited_at:{seconds:1763945028 nanos:430235974}" monitor_name=podsandbox
Nov 24 00:43:48.433237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f1334b60459dbda266e9e9e6efbb31714f83cf23f46d60b435e160cc4d2a6c0-shm.mount: Deactivated successfully.
Nov 24 00:43:48.468599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1-rootfs.mount: Deactivated successfully.
Nov 24 00:43:48.472800 containerd[1552]: time="2025-11-24T00:43:48.472766811Z" level=info msg="shim disconnected" id=cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1 namespace=k8s.io
Nov 24 00:43:48.473214 containerd[1552]: time="2025-11-24T00:43:48.473066019Z" level=warning msg="cleaning up after shim disconnected" id=cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1 namespace=k8s.io
Nov 24 00:43:48.473322 containerd[1552]: time="2025-11-24T00:43:48.473283347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 24 00:43:48.489169 containerd[1552]: time="2025-11-24T00:43:48.489046378Z" level=info msg="TearDown network for sandbox \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" successfully"
Nov 24 00:43:48.489169 containerd[1552]: time="2025-11-24T00:43:48.489079888Z" level=info msg="StopPodSandbox for \"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" returns successfully"
Nov 24 00:43:48.491078 containerd[1552]: time="2025-11-24T00:43:48.491017813Z" level=info msg="received sandbox container exit event sandbox_id:\"cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1\" exit_status:137 exited_at:{seconds:1763945028 nanos:430235974}" monitor_name=criService
Nov 24 00:43:48.511112 kubelet[2717]: I1124 00:43:48.511069 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5vm\" (UniqueName: \"kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm\") pod \"57089cda-6f65-4772-9a4c-a21d8b0b060c\" (UID: \"57089cda-6f65-4772-9a4c-a21d8b0b060c\") "
Nov 24 00:43:48.511518 kubelet[2717]: I1124 00:43:48.511127 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57089cda-6f65-4772-9a4c-a21d8b0b060c-cilium-config-path\") pod \"57089cda-6f65-4772-9a4c-a21d8b0b060c\" (UID: \"57089cda-6f65-4772-9a4c-a21d8b0b060c\") "
Nov 24 00:43:48.515517 kubelet[2717]: I1124 00:43:48.515431 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm" (OuterVolumeSpecName: "kube-api-access-gv5vm") pod "57089cda-6f65-4772-9a4c-a21d8b0b060c" (UID: "57089cda-6f65-4772-9a4c-a21d8b0b060c"). InnerVolumeSpecName "kube-api-access-gv5vm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 24 00:43:48.517436 kubelet[2717]: I1124 00:43:48.517401 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57089cda-6f65-4772-9a4c-a21d8b0b060c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "57089cda-6f65-4772-9a4c-a21d8b0b060c" (UID: "57089cda-6f65-4772-9a4c-a21d8b0b060c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 24 00:43:48.611379 kubelet[2717]: I1124 00:43:48.611328 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-xtables-lock\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611379 kubelet[2717]: I1124 00:43:48.611378 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611379 kubelet[2717]: I1124 00:43:48.611402 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9mn\" (UniqueName: \"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611432 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hostproc\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611448 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-lib-modules\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611464 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-net\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611480 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611499 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fa2431-929e-4af1-bca3-4df3a5ec27d2-clustermesh-secrets\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611720 kubelet[2717]: I1124 00:43:48.611516 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-etc-cni-netd\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611534 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-cgroup\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611548 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-run\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611568 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-bpf-maps\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611582 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cni-path\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611597 2717 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-kernel\") pod \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\" (UID: \"86fa2431-929e-4af1-bca3-4df3a5ec27d2\") "
Nov 24 00:43:48.611890 kubelet[2717]: I1124 00:43:48.611638 2717 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gv5vm\" (UniqueName: \"kubernetes.io/projected/57089cda-6f65-4772-9a4c-a21d8b0b060c-kube-api-access-gv5vm\") on node \"172-238-170-212\" DevicePath \"\""
Nov 24 00:43:48.612040 kubelet[2717]: I1124 00:43:48.611998 2717 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57089cda-6f65-4772-9a4c-a21d8b0b060c-cilium-config-path\") on node \"172-238-170-212\" DevicePath \"\""
Nov 24 00:43:48.612834 kubelet[2717]: I1124 00:43:48.612063 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.612834 kubelet[2717]: I1124 00:43:48.612115 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.615555 kubelet[2717]: I1124 00:43:48.615445 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 24 00:43:48.616764 kubelet[2717]: I1124 00:43:48.616731 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.616820 kubelet[2717]: I1124 00:43:48.616765 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.616820 kubelet[2717]: I1124 00:43:48.616783 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.618006 kubelet[2717]: I1124 00:43:48.617942 2717 scope.go:117] "RemoveContainer" containerID="ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1"
Nov 24 00:43:48.623385 kubelet[2717]: I1124 00:43:48.623277 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.623809 kubelet[2717]: I1124 00:43:48.623664 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.623809 kubelet[2717]: I1124 00:43:48.623747 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.623917 kubelet[2717]: I1124 00:43:48.623782 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.624264 kubelet[2717]: I1124 00:43:48.623993 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 24 00:43:48.627114 kubelet[2717]: I1124 00:43:48.627058 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 24 00:43:48.627155 systemd[1]: Removed slice kubepods-besteffort-pod57089cda_6f65_4772_9a4c_a21d8b0b060c.slice - libcontainer container kubepods-besteffort-pod57089cda_6f65_4772_9a4c_a21d8b0b060c.slice.
Nov 24 00:43:48.629488 kubelet[2717]: I1124 00:43:48.628612 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn" (OuterVolumeSpecName: "kube-api-access-jt9mn") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "kube-api-access-jt9mn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 24 00:43:48.632607 containerd[1552]: time="2025-11-24T00:43:48.632331532Z" level=info msg="RemoveContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\""
Nov 24 00:43:48.635790 kubelet[2717]: I1124 00:43:48.635746 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86fa2431-929e-4af1-bca3-4df3a5ec27d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "86fa2431-929e-4af1-bca3-4df3a5ec27d2" (UID: "86fa2431-929e-4af1-bca3-4df3a5ec27d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 24 00:43:48.638795 containerd[1552]: time="2025-11-24T00:43:48.638758123Z" level=info msg="RemoveContainer for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" returns successfully"
Nov 24 00:43:48.639221 kubelet[2717]: I1124 00:43:48.639132 2717 scope.go:117] "RemoveContainer" containerID="ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1"
Nov 24 00:43:48.639658 containerd[1552]: time="2025-11-24T00:43:48.639606107Z" level=error msg="ContainerStatus for \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\": not found"
Nov 24 00:43:48.639841 kubelet[2717]: E1124 00:43:48.639771 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\": not found" containerID="ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1"
Nov 24 00:43:48.639841 kubelet[2717]: I1124 00:43:48.639802 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1"} err="failed to get container status \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac9ad192f435a63f9a8a8c077de527efca95b8ac069697433c484afcde43d4e1\": not found"
Nov 24 00:43:48.639841 kubelet[2717]: I1124 00:43:48.639837 2717 scope.go:117] "RemoveContainer" containerID="4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b"
Nov 24 00:43:48.641567 containerd[1552]: time="2025-11-24T00:43:48.641532202Z" level=info msg="RemoveContainer for
\"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\"" Nov 24 00:43:48.647928 containerd[1552]: time="2025-11-24T00:43:48.647889344Z" level=info msg="RemoveContainer for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" returns successfully" Nov 24 00:43:48.648086 kubelet[2717]: I1124 00:43:48.648036 2717 scope.go:117] "RemoveContainer" containerID="b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39" Nov 24 00:43:48.650267 containerd[1552]: time="2025-11-24T00:43:48.650193407Z" level=info msg="RemoveContainer for \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\"" Nov 24 00:43:48.655582 containerd[1552]: time="2025-11-24T00:43:48.655556496Z" level=info msg="RemoveContainer for \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" returns successfully" Nov 24 00:43:48.656721 kubelet[2717]: I1124 00:43:48.656704 2717 scope.go:117] "RemoveContainer" containerID="244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a" Nov 24 00:43:48.663497 containerd[1552]: time="2025-11-24T00:43:48.663290938Z" level=info msg="RemoveContainer for \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\"" Nov 24 00:43:48.670267 containerd[1552]: time="2025-11-24T00:43:48.670212265Z" level=info msg="RemoveContainer for \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" returns successfully" Nov 24 00:43:48.670447 kubelet[2717]: I1124 00:43:48.670418 2717 scope.go:117] "RemoveContainer" containerID="7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82" Nov 24 00:43:48.672057 containerd[1552]: time="2025-11-24T00:43:48.672017591Z" level=info msg="RemoveContainer for \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\"" Nov 24 00:43:48.674921 containerd[1552]: time="2025-11-24T00:43:48.674894580Z" level=info msg="RemoveContainer for \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" returns successfully" Nov 24 00:43:48.675220 
kubelet[2717]: I1124 00:43:48.675186 2717 scope.go:117] "RemoveContainer" containerID="0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8" Nov 24 00:43:48.677167 containerd[1552]: time="2025-11-24T00:43:48.677015324Z" level=info msg="RemoveContainer for \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\"" Nov 24 00:43:48.680011 containerd[1552]: time="2025-11-24T00:43:48.679990601Z" level=info msg="RemoveContainer for \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" returns successfully" Nov 24 00:43:48.680189 kubelet[2717]: I1124 00:43:48.680149 2717 scope.go:117] "RemoveContainer" containerID="4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b" Nov 24 00:43:48.680507 containerd[1552]: time="2025-11-24T00:43:48.680466877Z" level=error msg="ContainerStatus for \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\": not found" Nov 24 00:43:48.680666 kubelet[2717]: E1124 00:43:48.680643 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\": not found" containerID="4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b" Nov 24 00:43:48.681098 kubelet[2717]: I1124 00:43:48.680958 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b"} err="failed to get container status \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a7be0739a7f8c6cd357c547ca7bad73aadcc04115204dc3106eab0f1cedde5b\": not found" Nov 24 00:43:48.681098 kubelet[2717]: I1124 
00:43:48.680986 2717 scope.go:117] "RemoveContainer" containerID="b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39" Nov 24 00:43:48.681203 containerd[1552]: time="2025-11-24T00:43:48.681164082Z" level=error msg="ContainerStatus for \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\": not found" Nov 24 00:43:48.681322 kubelet[2717]: E1124 00:43:48.681294 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\": not found" containerID="b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39" Nov 24 00:43:48.681372 kubelet[2717]: I1124 00:43:48.681329 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39"} err="failed to get container status \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\": rpc error: code = NotFound desc = an error occurred when try to find container \"b22f45ff959eb4dafcc923e1c071ee055ab2c8cc5b9292a064975561069f6e39\": not found" Nov 24 00:43:48.681372 kubelet[2717]: I1124 00:43:48.681353 2717 scope.go:117] "RemoveContainer" containerID="244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a" Nov 24 00:43:48.681583 containerd[1552]: time="2025-11-24T00:43:48.681553279Z" level=error msg="ContainerStatus for \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\": not found" Nov 24 00:43:48.682013 kubelet[2717]: E1124 00:43:48.681886 2717 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\": not found" containerID="244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a" Nov 24 00:43:48.682013 kubelet[2717]: I1124 00:43:48.681908 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a"} err="failed to get container status \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\": rpc error: code = NotFound desc = an error occurred when try to find container \"244524269f01e3bc0e60cdaed35751b3f9151c4b970028cbd45733ba4ec1670a\": not found" Nov 24 00:43:48.682013 kubelet[2717]: I1124 00:43:48.681922 2717 scope.go:117] "RemoveContainer" containerID="7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82" Nov 24 00:43:48.682320 containerd[1552]: time="2025-11-24T00:43:48.682227584Z" level=error msg="ContainerStatus for \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\": not found" Nov 24 00:43:48.682399 kubelet[2717]: E1124 00:43:48.682376 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\": not found" containerID="7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82" Nov 24 00:43:48.682430 kubelet[2717]: I1124 00:43:48.682400 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82"} err="failed to get container status \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"7d77ac78d0e0ab028e92dffdf319f1dc22a2c6a2febd9429764f2f7b3e44ba82\": not found" Nov 24 00:43:48.682430 kubelet[2717]: I1124 00:43:48.682419 2717 scope.go:117] "RemoveContainer" containerID="0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8" Nov 24 00:43:48.682594 containerd[1552]: time="2025-11-24T00:43:48.682549172Z" level=error msg="ContainerStatus for \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\": not found" Nov 24 00:43:48.683010 kubelet[2717]: E1124 00:43:48.682942 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\": not found" containerID="0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8" Nov 24 00:43:48.683121 kubelet[2717]: I1124 00:43:48.683012 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8"} err="failed to get container status \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c4d7e6b4dd3370444d4e8951fda6f8220492c6510ecb0064035ff6addfcdaf8\": not found" Nov 24 00:43:48.712292 kubelet[2717]: I1124 00:43:48.712254 2717 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jt9mn\" (UniqueName: \"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-kube-api-access-jt9mn\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712348 kubelet[2717]: I1124 00:43:48.712308 2717 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hostproc\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712348 kubelet[2717]: I1124 00:43:48.712326 2717 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-lib-modules\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712348 kubelet[2717]: I1124 00:43:48.712337 2717 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-net\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712348 kubelet[2717]: I1124 00:43:48.712349 2717 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/86fa2431-929e-4af1-bca3-4df3a5ec27d2-hubble-tls\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712361 2717 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/86fa2431-929e-4af1-bca3-4df3a5ec27d2-clustermesh-secrets\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712372 2717 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-etc-cni-netd\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712381 2717 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-cgroup\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712390 2717 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-run\") 
on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712400 2717 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-bpf-maps\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712408 2717 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cni-path\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712420 2717 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-host-proc-sys-kernel\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712458 kubelet[2717]: I1124 00:43:48.712430 2717 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86fa2431-929e-4af1-bca3-4df3a5ec27d2-xtables-lock\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.712641 kubelet[2717]: I1124 00:43:48.712439 2717 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/86fa2431-929e-4af1-bca3-4df3a5ec27d2-cilium-config-path\") on node \"172-238-170-212\" DevicePath \"\"" Nov 24 00:43:48.940175 systemd[1]: Removed slice kubepods-burstable-pod86fa2431_929e_4af1_bca3_4df3a5ec27d2.slice - libcontainer container kubepods-burstable-pod86fa2431_929e_4af1_bca3_4df3a5ec27d2.slice. Nov 24 00:43:48.940276 systemd[1]: kubepods-burstable-pod86fa2431_929e_4af1_bca3_4df3a5ec27d2.slice: Consumed 6.592s CPU time, 125.5M memory peak, 136K read from disk, 13.3M written to disk. 
Nov 24 00:43:49.206935 kubelet[2717]: E1124 00:43:49.202521 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:43:49.207835 kubelet[2717]: I1124 00:43:49.207711 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57089cda-6f65-4772-9a4c-a21d8b0b060c" path="/var/lib/kubelet/pods/57089cda-6f65-4772-9a4c-a21d8b0b060c/volumes" Nov 24 00:43:49.208518 kubelet[2717]: I1124 00:43:49.208288 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fa2431-929e-4af1-bca3-4df3a5ec27d2" path="/var/lib/kubelet/pods/86fa2431-929e-4af1-bca3-4df3a5ec27d2/volumes" Nov 24 00:43:49.308727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cac177301b7e15723b48611e4fde438fe8aeabbb4c72bffef6696febea19b4e1-shm.mount: Deactivated successfully. Nov 24 00:43:49.309521 systemd[1]: var-lib-kubelet-pods-86fa2431\x2d929e\x2d4af1\x2dbca3\x2d4df3a5ec27d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 24 00:43:49.311396 systemd[1]: var-lib-kubelet-pods-86fa2431\x2d929e\x2d4af1\x2dbca3\x2d4df3a5ec27d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 24 00:43:49.311698 systemd[1]: var-lib-kubelet-pods-57089cda\x2d6f65\x2d4772\x2d9a4c\x2da21d8b0b060c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgv5vm.mount: Deactivated successfully. Nov 24 00:43:49.311782 systemd[1]: var-lib-kubelet-pods-86fa2431\x2d929e\x2d4af1\x2dbca3\x2d4df3a5ec27d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djt9mn.mount: Deactivated successfully. 
Nov 24 00:43:49.326655 kubelet[2717]: E1124 00:43:49.326614 2717 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 24 00:43:50.212850 sshd[4259]: Connection closed by 147.75.109.163 port 43464 Nov 24 00:43:50.213565 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:50.219564 systemd[1]: sshd@20-172.238.170.212:22-147.75.109.163:43464.service: Deactivated successfully. Nov 24 00:43:50.221954 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:43:50.223370 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:43:50.225046 systemd-logind[1528]: Removed session 21. Nov 24 00:43:50.276890 systemd[1]: Started sshd@21-172.238.170.212:22-147.75.109.163:43472.service - OpenSSH per-connection server daemon (147.75.109.163:43472). Nov 24 00:43:50.620647 sshd[4402]: Accepted publickey for core from 147.75.109.163 port 43472 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:50.622290 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:50.630964 systemd-logind[1528]: New session 22 of user core. Nov 24 00:43:50.640830 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:43:51.336578 sshd[4405]: Connection closed by 147.75.109.163 port 43472 Nov 24 00:43:51.337971 sshd-session[4402]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:51.346532 systemd[1]: sshd@21-172.238.170.212:22-147.75.109.163:43472.service: Deactivated successfully. Nov 24 00:43:51.347060 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:43:51.352245 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:43:51.363174 systemd-logind[1528]: Removed session 22. 
Nov 24 00:43:51.365715 systemd[1]: Created slice kubepods-burstable-podc6d87f28_3917_4011_8382_f0f766109d37.slice - libcontainer container kubepods-burstable-podc6d87f28_3917_4011_8382_f0f766109d37.slice. Nov 24 00:43:51.401400 systemd[1]: Started sshd@22-172.238.170.212:22-147.75.109.163:46510.service - OpenSSH per-connection server daemon (147.75.109.163:46510). Nov 24 00:43:51.429811 kubelet[2717]: I1124 00:43:51.429751 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-etc-cni-netd\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429820 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-cni-path\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429879 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-host-proc-sys-net\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429900 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-xtables-lock\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429916 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-host-proc-sys-kernel\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429951 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-cilium-run\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430194 kubelet[2717]: I1124 00:43:51.429965 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-bpf-maps\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.429979 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-cilium-cgroup\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.429993 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-lib-modules\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.430028 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6d87f28-3917-4011-8382-f0f766109d37-cilium-config-path\") 
pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.430054 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6d87f28-3917-4011-8382-f0f766109d37-hubble-tls\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.430072 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84rr\" (UniqueName: \"kubernetes.io/projected/c6d87f28-3917-4011-8382-f0f766109d37-kube-api-access-m84rr\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430333 kubelet[2717]: I1124 00:43:51.430105 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6d87f28-3917-4011-8382-f0f766109d37-hostproc\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430450 kubelet[2717]: I1124 00:43:51.430123 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6d87f28-3917-4011-8382-f0f766109d37-clustermesh-secrets\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 00:43:51.430450 kubelet[2717]: I1124 00:43:51.430137 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6d87f28-3917-4011-8382-f0f766109d37-cilium-ipsec-secrets\") pod \"cilium-hmzp2\" (UID: \"c6d87f28-3917-4011-8382-f0f766109d37\") " pod="kube-system/cilium-hmzp2" Nov 24 
00:43:51.670922 kubelet[2717]: E1124 00:43:51.670773 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:43:51.672813 containerd[1552]: time="2025-11-24T00:43:51.671505700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmzp2,Uid:c6d87f28-3917-4011-8382-f0f766109d37,Namespace:kube-system,Attempt:0,}" Nov 24 00:43:51.692255 containerd[1552]: time="2025-11-24T00:43:51.692155918Z" level=info msg="connecting to shim 5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:43:51.724826 systemd[1]: Started cri-containerd-5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e.scope - libcontainer container 5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e. Nov 24 00:43:51.737277 sshd[4416]: Accepted publickey for core from 147.75.109.163 port 46510 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc Nov 24 00:43:51.739358 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:43:51.747351 systemd-logind[1528]: New session 23 of user core. Nov 24 00:43:51.751978 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 24 00:43:51.779456 containerd[1552]: time="2025-11-24T00:43:51.779413786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hmzp2,Uid:c6d87f28-3917-4011-8382-f0f766109d37,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\"" Nov 24 00:43:51.780915 kubelet[2717]: E1124 00:43:51.780883 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 24 00:43:51.785622 containerd[1552]: time="2025-11-24T00:43:51.785578561Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 24 00:43:51.797421 containerd[1552]: time="2025-11-24T00:43:51.795904015Z" level=info msg="Container de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:43:51.802922 containerd[1552]: time="2025-11-24T00:43:51.802897873Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8\"" Nov 24 00:43:51.803901 containerd[1552]: time="2025-11-24T00:43:51.803866576Z" level=info msg="StartContainer for \"de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8\"" Nov 24 00:43:51.806301 containerd[1552]: time="2025-11-24T00:43:51.806269708Z" level=info msg="connecting to shim de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" protocol=ttrpc version=3 Nov 24 00:43:51.828041 systemd[1]: Started cri-containerd-de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8.scope - 
libcontainer container de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8. Nov 24 00:43:51.865842 containerd[1552]: time="2025-11-24T00:43:51.865810290Z" level=info msg="StartContainer for \"de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8\" returns successfully" Nov 24 00:43:51.878240 systemd[1]: cri-containerd-de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8.scope: Deactivated successfully. Nov 24 00:43:51.882413 containerd[1552]: time="2025-11-24T00:43:51.882372858Z" level=info msg="received container exit event container_id:\"de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8\" id:\"de90a52e2ba05d08de04688f6b37f8ed0bbdfcb4a3c15b3fb7e7635c8b7cd0c8\" pid:4484 exited_at:{seconds:1763945031 nanos:882140340}" Nov 24 00:43:51.972005 sshd[4464]: Connection closed by 147.75.109.163 port 46510 Nov 24 00:43:51.973505 sshd-session[4416]: pam_unix(sshd:session): session closed for user core Nov 24 00:43:51.979054 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:43:51.979912 systemd[1]: sshd@22-172.238.170.212:22-147.75.109.163:46510.service: Deactivated successfully. Nov 24 00:43:51.983083 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:43:51.985319 systemd-logind[1528]: Removed session 23. Nov 24 00:43:51.990544 kubelet[2717]: I1124 00:43:51.990383 2717 setters.go:618] "Node became not ready" node="172-238-170-212" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T00:43:51Z","lastTransitionTime":"2025-11-24T00:43:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 24 00:43:52.042729 systemd[1]: Started sshd@23-172.238.170.212:22-147.75.109.163:46526.service - OpenSSH per-connection server daemon (147.75.109.163:46526). 
Nov 24 00:43:52.386354 sshd[4522]: Accepted publickey for core from 147.75.109.163 port 46526 ssh2: RSA SHA256:bUH+mE6XbQFzPBDvrvhZzxHcM5Zp0YDa2/IKAdw37Vc
Nov 24 00:43:52.388758 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 00:43:52.395665 systemd-logind[1528]: New session 24 of user core.
Nov 24 00:43:52.403825 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 24 00:43:52.543229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878993867.mount: Deactivated successfully.
Nov 24 00:43:52.650862 kubelet[2717]: E1124 00:43:52.650601 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:52.654418 containerd[1552]: time="2025-11-24T00:43:52.654360590Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 24 00:43:52.663404 containerd[1552]: time="2025-11-24T00:43:52.663033967Z" level=info msg="Container 6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:43:52.672426 containerd[1552]: time="2025-11-24T00:43:52.672394568Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf\""
Nov 24 00:43:52.674161 containerd[1552]: time="2025-11-24T00:43:52.674138286Z" level=info msg="StartContainer for \"6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf\""
Nov 24 00:43:52.676633 containerd[1552]: time="2025-11-24T00:43:52.676524718Z" level=info msg="connecting to shim 6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" protocol=ttrpc version=3
Nov 24 00:43:52.700817 systemd[1]: Started cri-containerd-6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf.scope - libcontainer container 6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf.
Nov 24 00:43:52.734836 containerd[1552]: time="2025-11-24T00:43:52.734621575Z" level=info msg="StartContainer for \"6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf\" returns successfully"
Nov 24 00:43:52.743871 systemd[1]: cri-containerd-6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf.scope: Deactivated successfully.
Nov 24 00:43:52.745585 containerd[1552]: time="2025-11-24T00:43:52.744317924Z" level=info msg="received container exit event container_id:\"6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf\" id:\"6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf\" pid:4545 exited_at:{seconds:1763945032 nanos:744127665}"
Nov 24 00:43:52.767421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6de11bf5e0e4f1db04f2423439972e4e2457df617e7a6d5ca11dbf51e63ed8bf-rootfs.mount: Deactivated successfully.
Nov 24 00:43:53.656188 kubelet[2717]: E1124 00:43:53.655668 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:53.661852 containerd[1552]: time="2025-11-24T00:43:53.661807497Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 24 00:43:53.674703 containerd[1552]: time="2025-11-24T00:43:53.673013996Z" level=info msg="Container ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:43:53.686907 containerd[1552]: time="2025-11-24T00:43:53.686864316Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd\""
Nov 24 00:43:53.687595 containerd[1552]: time="2025-11-24T00:43:53.687560291Z" level=info msg="StartContainer for \"ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd\""
Nov 24 00:43:53.688832 containerd[1552]: time="2025-11-24T00:43:53.688801712Z" level=info msg="connecting to shim ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" protocol=ttrpc version=3
Nov 24 00:43:53.718959 systemd[1]: Started cri-containerd-ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd.scope - libcontainer container ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd.
Nov 24 00:43:53.795051 containerd[1552]: time="2025-11-24T00:43:53.794965474Z" level=info msg="StartContainer for \"ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd\" returns successfully"
Nov 24 00:43:53.801949 systemd[1]: cri-containerd-ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd.scope: Deactivated successfully.
Nov 24 00:43:53.807099 containerd[1552]: time="2025-11-24T00:43:53.807053337Z" level=info msg="received container exit event container_id:\"ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd\" id:\"ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd\" pid:4588 exited_at:{seconds:1763945033 nanos:805967395}"
Nov 24 00:43:53.837740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca8ae77b2868b321208edb9e3a1eaa56495e6dbd3def57cc5ec768fbc875d2fd-rootfs.mount: Deactivated successfully.
Nov 24 00:43:54.201481 kubelet[2717]: E1124 00:43:54.201410 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:54.328430 kubelet[2717]: E1124 00:43:54.328342 2717 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 24 00:43:54.667206 kubelet[2717]: E1124 00:43:54.667152 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:54.675701 containerd[1552]: time="2025-11-24T00:43:54.675037707Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 24 00:43:54.689712 containerd[1552]: time="2025-11-24T00:43:54.688984207Z" level=info msg="Container 9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:43:54.701725 containerd[1552]: time="2025-11-24T00:43:54.701129880Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462\""
Nov 24 00:43:54.704291 containerd[1552]: time="2025-11-24T00:43:54.704242468Z" level=info msg="StartContainer for \"9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462\""
Nov 24 00:43:54.709254 containerd[1552]: time="2025-11-24T00:43:54.709208782Z" level=info msg="connecting to shim 9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" protocol=ttrpc version=3
Nov 24 00:43:54.739825 systemd[1]: Started cri-containerd-9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462.scope - libcontainer container 9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462.
Nov 24 00:43:54.783030 systemd[1]: cri-containerd-9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462.scope: Deactivated successfully.
Nov 24 00:43:54.784734 containerd[1552]: time="2025-11-24T00:43:54.783557629Z" level=info msg="received container exit event container_id:\"9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462\" id:\"9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462\" pid:4629 exited_at:{seconds:1763945034 nanos:783374711}"
Nov 24 00:43:54.785847 containerd[1552]: time="2025-11-24T00:43:54.785810683Z" level=info msg="StartContainer for \"9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462\" returns successfully"
Nov 24 00:43:54.818010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9262bd5e68f1a80b043333ac27ca023d7da4beb364796b13bbf5edae95ab5462-rootfs.mount: Deactivated successfully.
Nov 24 00:43:55.202142 kubelet[2717]: E1124 00:43:55.201475 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:55.672430 kubelet[2717]: E1124 00:43:55.672350 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:55.676902 containerd[1552]: time="2025-11-24T00:43:55.676843661Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 24 00:43:55.692878 containerd[1552]: time="2025-11-24T00:43:55.692765748Z" level=info msg="Container 1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35: CDI devices from CRI Config.CDIDevices: []"
Nov 24 00:43:55.701363 containerd[1552]: time="2025-11-24T00:43:55.701319697Z" level=info msg="CreateContainer within sandbox \"5c6c0c4f3842dac12e6eb8981374ce2683834dc79abdde8acbf3fb01460d742e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35\""
Nov 24 00:43:55.701919 containerd[1552]: time="2025-11-24T00:43:55.701900283Z" level=info msg="StartContainer for \"1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35\""
Nov 24 00:43:55.702885 containerd[1552]: time="2025-11-24T00:43:55.702807117Z" level=info msg="connecting to shim 1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35" address="unix:///run/containerd/s/95889c35396b35241855e398681a73c3febbd8c1fa63f35b1d7654d80e55d503" protocol=ttrpc version=3
Nov 24 00:43:55.723815 systemd[1]: Started cri-containerd-1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35.scope - libcontainer container 1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35.
Nov 24 00:43:55.790147 containerd[1552]: time="2025-11-24T00:43:55.790103947Z" level=info msg="StartContainer for \"1f2d720ea467a44e1b7316b8fd5d58dd205716e984efa1bee32d3bdf55b4fa35\" returns successfully"
Nov 24 00:43:56.278725 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 24 00:43:56.679366 kubelet[2717]: E1124 00:43:56.678835 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:56.695707 kubelet[2717]: I1124 00:43:56.695388 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hmzp2" podStartSLOduration=5.695368229 podStartE2EDuration="5.695368229s" podCreationTimestamp="2025-11-24 00:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:43:56.695335749 +0000 UTC m=+167.596667201" watchObservedRunningTime="2025-11-24 00:43:56.695368229 +0000 UTC m=+167.596699681"
Nov 24 00:43:57.680997 kubelet[2717]: E1124 00:43:57.680955 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:59.303904 systemd-networkd[1426]: lxc_health: Link UP
Nov 24 00:43:59.304254 systemd-networkd[1426]: lxc_health: Gained carrier
Nov 24 00:43:59.675036 kubelet[2717]: E1124 00:43:59.674640 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:43:59.692704 kubelet[2717]: E1124 00:43:59.691215 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:44:00.554817 systemd-networkd[1426]: lxc_health: Gained IPv6LL
Nov 24 00:44:00.691724 kubelet[2717]: E1124 00:44:00.691610 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:44:05.450597 sshd[4525]: Connection closed by 147.75.109.163 port 46526
Nov 24 00:44:05.451876 sshd-session[4522]: pam_unix(sshd:session): session closed for user core
Nov 24 00:44:05.456609 systemd[1]: sshd@23-172.238.170.212:22-147.75.109.163:46526.service: Deactivated successfully.
Nov 24 00:44:05.459422 systemd[1]: session-24.scope: Deactivated successfully.
Nov 24 00:44:05.461697 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit.
Nov 24 00:44:05.463599 systemd-logind[1528]: Removed session 24.
Nov 24 00:44:06.201575 kubelet[2717]: E1124 00:44:06.201418 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 24 00:44:06.201575 kubelet[2717]: E1124 00:44:06.201419 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"