Nov 4 04:55:37.246323 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 03:00:51 -00 2025
Nov 4 04:55:37.246350 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:55:37.246360 kernel: BIOS-provided physical RAM map:
Nov 4 04:55:37.246366 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 4 04:55:37.246373 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 4 04:55:37.246379 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 04:55:37.246388 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 4 04:55:37.246395 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 4 04:55:37.246401 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 04:55:37.246407 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 04:55:37.246414 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 04:55:37.246420 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 04:55:37.246426 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 4 04:55:37.246433 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 04:55:37.246443 kernel: NX (Execute Disable) protection: active
Nov 4 04:55:37.246449 kernel: APIC: Static calls initialized
Nov 4 04:55:37.246456 kernel: SMBIOS 2.8 present.
Nov 4 04:55:37.246463 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 4 04:55:37.246469 kernel: DMI: Memory slots populated: 1/1
Nov 4 04:55:37.246478 kernel: Hypervisor detected: KVM
Nov 4 04:55:37.246485 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 4 04:55:37.246492 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 04:55:37.246498 kernel: kvm-clock: using sched offset of 6170451730 cycles
Nov 4 04:55:37.246506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 04:55:37.246513 kernel: tsc: Detected 2000.000 MHz processor
Nov 4 04:55:37.246520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 04:55:37.246528 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 04:55:37.246537 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 4 04:55:37.246545 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 04:55:37.246552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 04:55:37.246559 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 4 04:55:37.246566 kernel: Using GB pages for direct mapping
Nov 4 04:55:37.246573 kernel: ACPI: Early table checksum verification disabled
Nov 4 04:55:37.246580 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 4 04:55:37.246587 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246596 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246603 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246611 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 4 04:55:37.246618 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246625 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246635 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246644 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 04:55:37.246652 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 4 04:55:37.246659 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 4 04:55:37.246667 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 4 04:55:37.246674 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 4 04:55:37.246683 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 4 04:55:37.246691 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 4 04:55:37.246698 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 4 04:55:37.246705 kernel: No NUMA configuration found
Nov 4 04:55:37.246713 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 4 04:55:37.246720 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Nov 4 04:55:37.246727 kernel: Zone ranges:
Nov 4 04:55:37.246737 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 04:55:37.246744 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 4 04:55:37.246751 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 4 04:55:37.246758 kernel: Device empty
Nov 4 04:55:37.246766 kernel: Movable zone start for each node
Nov 4 04:55:37.246797 kernel: Early memory node ranges
Nov 4 04:55:37.246806 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 04:55:37.246814 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 4 04:55:37.246825 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 4 04:55:37.246833 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 4 04:55:37.246840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 04:55:37.246847 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 04:55:37.246855 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 4 04:55:37.246862 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 04:55:37.246870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 04:55:37.246879 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 04:55:37.246887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 04:55:37.246894 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 04:55:37.246902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 04:55:37.246909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 04:55:37.246916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 04:55:37.246923 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 04:55:37.246933 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 04:55:37.246940 kernel: TSC deadline timer available
Nov 4 04:55:37.246948 kernel: CPU topo: Max. logical packages: 1
Nov 4 04:55:37.246955 kernel: CPU topo: Max. logical dies: 1
Nov 4 04:55:37.246962 kernel: CPU topo: Max. dies per package: 1
Nov 4 04:55:37.246969 kernel: CPU topo: Max. threads per core: 1
Nov 4 04:55:37.246976 kernel: CPU topo: Num. cores per package: 2
Nov 4 04:55:37.246983 kernel: CPU topo: Num. threads per package: 2
Nov 4 04:55:37.246993 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 04:55:37.247000 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 04:55:37.247008 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 4 04:55:37.247015 kernel: kvm-guest: setup PV sched yield
Nov 4 04:55:37.247022 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 04:55:37.247029 kernel: Booting paravirtualized kernel on KVM
Nov 4 04:55:37.247037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 04:55:37.247046 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 04:55:37.247054 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 04:55:37.247061 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 04:55:37.247068 kernel: pcpu-alloc: [0] 0 1
Nov 4 04:55:37.247075 kernel: kvm-guest: PV spinlocks enabled
Nov 4 04:55:37.247082 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 4 04:55:37.247091 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01
Nov 4 04:55:37.247100 kernel: random: crng init done
Nov 4 04:55:37.247108 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 4 04:55:37.247115 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 04:55:37.247122 kernel: Fallback order for Node 0: 0
Nov 4 04:55:37.247130 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 4 04:55:37.247137 kernel: Policy zone: Normal
Nov 4 04:55:37.247144 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 04:55:37.247154 kernel: software IO TLB: area num 2.
Nov 4 04:55:37.247198 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 04:55:37.247212 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 04:55:37.247234 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 04:55:37.247247 kernel: Dynamic Preempt: voluntary
Nov 4 04:55:37.247255 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 04:55:37.247263 kernel: rcu: RCU event tracing is enabled.
Nov 4 04:55:37.247275 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 04:55:37.247283 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 04:55:37.248503 kernel: Rude variant of Tasks RCU enabled.
Nov 4 04:55:37.248518 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 04:55:37.248526 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 04:55:37.248533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 04:55:37.248544 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:55:37.248559 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:55:37.248566 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 04:55:37.248574 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 04:55:37.248584 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 04:55:37.248592 kernel: Console: colour VGA+ 80x25
Nov 4 04:55:37.248600 kernel: printk: legacy console [tty0] enabled
Nov 4 04:55:37.248607 kernel: printk: legacy console [ttyS0] enabled
Nov 4 04:55:37.248615 kernel: ACPI: Core revision 20240827
Nov 4 04:55:37.248625 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 04:55:37.248633 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 04:55:37.248640 kernel: x2apic enabled
Nov 4 04:55:37.248648 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 04:55:37.248655 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 4 04:55:37.248663 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 4 04:55:37.248673 kernel: kvm-guest: setup PV IPIs
Nov 4 04:55:37.248681 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 04:55:37.248689 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 4 04:55:37.248696 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 4 04:55:37.248704 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 04:55:37.248711 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 04:55:37.248719 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 04:55:37.248729 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 04:55:37.248737 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 04:55:37.248744 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 04:55:37.248752 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 4 04:55:37.248759 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 04:55:37.248793 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 04:55:37.248808 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 4 04:55:37.248822 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 4 04:55:37.248830 kernel: active return thunk: srso_alias_return_thunk
Nov 4 04:55:37.248838 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 4 04:55:37.248845 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 4 04:55:37.248853 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 4 04:55:37.248861 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 04:55:37.248868 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 04:55:37.248878 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 04:55:37.248886 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 4 04:55:37.248893 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 04:55:37.248901 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 4 04:55:37.248909 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 4 04:55:37.248916 kernel: Freeing SMP alternatives memory: 32K
Nov 4 04:55:37.248924 kernel: pid_max: default: 32768 minimum: 301
Nov 4 04:55:37.248934 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 04:55:37.248941 kernel: landlock: Up and running.
Nov 4 04:55:37.248949 kernel: SELinux: Initializing.
Nov 4 04:55:37.248956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:55:37.248964 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 4 04:55:37.248972 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 4 04:55:37.248980 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 04:55:37.248989 kernel: ... version: 0
Nov 4 04:55:37.248997 kernel: ... bit width: 48
Nov 4 04:55:37.249004 kernel: ... generic registers: 6
Nov 4 04:55:37.249012 kernel: ... value mask: 0000ffffffffffff
Nov 4 04:55:37.249019 kernel: ... max period: 00007fffffffffff
Nov 4 04:55:37.249027 kernel: ... fixed-purpose events: 0
Nov 4 04:55:37.249034 kernel: ... event mask: 000000000000003f
Nov 4 04:55:37.249044 kernel: signal: max sigframe size: 3376
Nov 4 04:55:37.249051 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 04:55:37.249059 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 04:55:37.249067 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 04:55:37.249074 kernel: smp: Bringing up secondary CPUs ...
Nov 4 04:55:37.249082 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 04:55:37.249089 kernel: .... node #0, CPUs: #1
Nov 4 04:55:37.249097 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 04:55:37.249106 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 4 04:55:37.249114 kernel: Memory: 3979480K/4193772K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15360K init, 2684K bss, 208864K reserved, 0K cma-reserved)
Nov 4 04:55:37.249122 kernel: devtmpfs: initialized
Nov 4 04:55:37.249129 kernel: x86/mm: Memory block size: 128MB
Nov 4 04:55:37.249137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 04:55:37.249145 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 04:55:37.249152 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 04:55:37.249162 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 04:55:37.249169 kernel: audit: initializing netlink subsys (disabled)
Nov 4 04:55:37.249177 kernel: audit: type=2000 audit(1762232133.482:1): state=initialized audit_enabled=0 res=1
Nov 4 04:55:37.249184 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 04:55:37.249192 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 04:55:37.249199 kernel: cpuidle: using governor menu
Nov 4 04:55:37.249207 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 04:55:37.249216 kernel: dca service started, version 1.12.1
Nov 4 04:55:37.249224 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 04:55:37.249231 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 4 04:55:37.249239 kernel: PCI: Using configuration type 1 for base access
Nov 4 04:55:37.249246 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 04:55:37.249254 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 04:55:37.249262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 04:55:37.249271 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 04:55:37.249279 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 04:55:37.249287 kernel: ACPI: Added _OSI(Module Device)
Nov 4 04:55:37.249294 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 04:55:37.249301 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 04:55:37.249309 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 04:55:37.249316 kernel: ACPI: Interpreter enabled
Nov 4 04:55:37.249326 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 4 04:55:37.249333 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 04:55:37.249341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 04:55:37.249348 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 04:55:37.249356 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 04:55:37.249363 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 04:55:37.249611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 04:55:37.250232 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 04:55:37.250444 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 04:55:37.250456 kernel: PCI host bridge to bus 0000:00
Nov 4 04:55:37.250977 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 04:55:37.251163 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 04:55:37.251334 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 04:55:37.251498 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 4 04:55:37.251660 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 04:55:37.251902 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 4 04:55:37.252074 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 04:55:37.252270 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 04:55:37.252462 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 4 04:55:37.252746 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 4 04:55:37.253054 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 4 04:55:37.253243 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 4 04:55:37.253998 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 04:55:37.254345 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 4 04:55:37.254520 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 4 04:55:37.254816 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 4 04:55:37.255094 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 4 04:55:37.255314 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 4 04:55:37.255503 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 4 04:55:37.255688 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 4 04:55:37.256028 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 4 04:55:37.256245 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 4 04:55:37.256436 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 04:55:37.256615 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 04:55:37.256852 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 04:55:37.257045 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 4 04:55:37.257222 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 4 04:55:37.257408 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 04:55:37.257587 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 04:55:37.257599 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 04:55:37.257612 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 04:55:37.257621 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 04:55:37.257629 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 04:55:37.257637 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 04:55:37.257645 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 04:55:37.257653 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 04:55:37.257661 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 04:55:37.257672 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 04:55:37.257680 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 04:55:37.257688 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 04:55:37.257696 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 04:55:37.257705 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 04:55:37.257713 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 04:55:37.257721 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 04:55:37.257732 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 04:55:37.257740 kernel: iommu: Default domain type: Translated
Nov 4 04:55:37.257749 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 04:55:37.257757 kernel: PCI: Using ACPI for IRQ routing
Nov 4 04:55:37.257765 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 04:55:37.260456 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 4 04:55:37.260467 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 4 04:55:37.260675 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 04:55:37.260894 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 04:55:37.261079 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 04:55:37.261091 kernel: vgaarb: loaded
Nov 4 04:55:37.261119 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 04:55:37.261128 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 04:55:37.261136 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 04:55:37.261149 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 04:55:37.261157 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 04:55:37.261165 kernel: pnp: PnP ACPI init
Nov 4 04:55:37.261368 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 04:55:37.261381 kernel: pnp: PnP ACPI: found 5 devices
Nov 4 04:55:37.261389 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 04:55:37.261401 kernel: NET: Registered PF_INET protocol family
Nov 4 04:55:37.261408 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 4 04:55:37.261416 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 4 04:55:37.261424 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 04:55:37.261432 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 04:55:37.261439 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 4 04:55:37.261447 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 4 04:55:37.261457 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:55:37.261464 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 4 04:55:37.261472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 04:55:37.261479 kernel: NET: Registered PF_XDP protocol family
Nov 4 04:55:37.263932 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 04:55:37.264104 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 04:55:37.264267 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 04:55:37.264433 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 4 04:55:37.264593 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 04:55:37.264752 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 4 04:55:37.264762 kernel: PCI: CLS 0 bytes, default 64
Nov 4 04:55:37.264837 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 4 04:55:37.264849 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 4 04:55:37.264858 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 4 04:55:37.264870 kernel: Initialise system trusted keyrings
Nov 4 04:55:37.264878 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 4 04:55:37.264886 kernel: Key type asymmetric registered
Nov 4 04:55:37.264893 kernel: Asymmetric key parser 'x509' registered
Nov 4 04:55:37.264901 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 4 04:55:37.264908 kernel: io scheduler mq-deadline registered
Nov 4 04:55:37.264916 kernel: io scheduler kyber registered
Nov 4 04:55:37.264926 kernel: io scheduler bfq registered
Nov 4 04:55:37.264933 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 4 04:55:37.264941 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 04:55:37.264949 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 4 04:55:37.264957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 4 04:55:37.264964 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 4 04:55:37.264972 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 4 04:55:37.264982 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 4 04:55:37.264989 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 4 04:55:37.264997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 4 04:55:37.265195 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 4 04:55:37.265439 kernel: rtc_cmos 00:03: registered as rtc0
Nov 4 04:55:37.265615 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T04:55:35 UTC (1762232135)
Nov 4 04:55:37.265812 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 4 04:55:37.265831 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 4 04:55:37.265839 kernel: NET: Registered PF_INET6 protocol family
Nov 4 04:55:37.265847 kernel: Segment Routing with IPv6
Nov 4 04:55:37.265855 kernel: In-situ OAM (IOAM) with IPv6
Nov 4 04:55:37.265862 kernel: NET: Registered PF_PACKET protocol family
Nov 4 04:55:37.265870 kernel: Key type dns_resolver registered
Nov 4 04:55:37.265878 kernel: IPI shorthand broadcast: enabled
Nov 4 04:55:37.265888 kernel: sched_clock: Marking stable (1848004290, 362090050)->(2299883360, -89789020)
Nov 4 04:55:37.265895 kernel: registered taskstats version 1
Nov 4 04:55:37.265903 kernel: Loading compiled-in X.509 certificates
Nov 4 04:55:37.265911 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: dafbe857b8ef9eaad4381fdddb57853ce023547e'
Nov 4 04:55:37.265918 kernel: Demotion targets for Node 0: null
Nov 4 04:55:37.265927 kernel: Key type .fscrypt registered
Nov 4 04:55:37.265934 kernel: Key type fscrypt-provisioning registered
Nov 4 04:55:37.265944 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 4 04:55:37.265952 kernel: ima: Allocated hash algorithm: sha1
Nov 4 04:55:37.265960 kernel: ima: No architecture policies found
Nov 4 04:55:37.265968 kernel: clk: Disabling unused clocks
Nov 4 04:55:37.265976 kernel: Freeing unused kernel image (initmem) memory: 15360K
Nov 4 04:55:37.265984 kernel: Write protecting the kernel read-only data: 45056k
Nov 4 04:55:37.265991 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 4 04:55:37.266001 kernel: Run /init as init process
Nov 4 04:55:37.266009 kernel: with arguments:
Nov 4 04:55:37.266017 kernel: /init
Nov 4 04:55:37.266024 kernel: with environment:
Nov 4 04:55:37.266032 kernel: HOME=/
Nov 4 04:55:37.266054 kernel: TERM=linux
Nov 4 04:55:37.266064 kernel: SCSI subsystem initialized
Nov 4 04:55:37.266074 kernel: libata version 3.00 loaded.
Nov 4 04:55:37.266262 kernel: ahci 0000:00:1f.2: version 3.0
Nov 4 04:55:37.266275 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 4 04:55:37.266449 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 4 04:55:37.266624 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 4 04:55:37.267632 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 4 04:55:37.268100 kernel: scsi host0: ahci
Nov 4 04:55:37.268303 kernel: scsi host1: ahci
Nov 4 04:55:37.271015 kernel: scsi host2: ahci
Nov 4 04:55:37.271214 kernel: scsi host3: ahci
Nov 4 04:55:37.271405 kernel: scsi host4: ahci
Nov 4 04:55:37.271598 kernel: scsi host5: ahci
Nov 4 04:55:37.271611 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Nov 4 04:55:37.271620 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Nov 4 04:55:37.271628 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Nov 4 04:55:37.271636 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Nov 4 04:55:37.271645 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Nov 4 04:55:37.271653 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Nov 4 04:55:37.271664 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.271672 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.271680 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.271688 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.271695 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.271703 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 4 04:55:37.272975 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Nov 4 04:55:37.273178 kernel: scsi host6: Virtio SCSI HBA
Nov 4 04:55:37.273390 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 4 04:55:37.273592 kernel: sd 6:0:0:0: Power-on or device reset occurred
Nov 4 04:55:37.273819 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 4 04:55:37.274029 kernel: sd 6:0:0:0: [sda] Write Protect is off
Nov 4 04:55:37.274229 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 4 04:55:37.274421 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 4 04:55:37.274433 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 4 04:55:37.274442 kernel: GPT:25804799 != 167739391
Nov 4 04:55:37.274450 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 4 04:55:37.274458 kernel: GPT:25804799 != 167739391
Nov 4 04:55:37.274466 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 4 04:55:37.274477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 4 04:55:37.274668 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Nov 4 04:55:37.274679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 4 04:55:37.274688 kernel: device-mapper: uevent: version 1.0.3
Nov 4 04:55:37.274696 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 4 04:55:37.274707 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 4 04:55:37.274717 kernel: raid6: avx2x4 gen() 35567 MB/s
Nov 4 04:55:37.274727 kernel: raid6: avx2x2 gen() 36468 MB/s
Nov 4 04:55:37.274735 kernel: raid6: avx2x1 gen() 25615 MB/s
Nov 4 04:55:37.274743 kernel: raid6: using algorithm avx2x2 gen() 36468 MB/s
Nov 4 04:55:37.274751 kernel: raid6: .... xor() 16132 MB/s, rmw enabled
Nov 4 04:55:37.274761 kernel: raid6: using avx2x2 recovery algorithm
Nov 4 04:55:37.279804 kernel: xor: automatically using best checksumming function avx
Nov 4 04:55:37.279818 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 4 04:55:37.279828 kernel: BTRFS: device fsid 6f0a5369-79b6-4a87-b9a6-85ec05be306c devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (167)
Nov 4 04:55:37.279836 kernel: BTRFS info (device dm-0): first mount of filesystem 6f0a5369-79b6-4a87-b9a6-85ec05be306c
Nov 4 04:55:37.279844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 4 04:55:37.279852 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 4 04:55:37.279865 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 4 04:55:37.279873 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 4 04:55:37.279881 kernel: loop: module loaded
Nov 4 04:55:37.279889 kernel: loop0: detected capacity change from 0 to 100136
Nov 4 04:55:37.279897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 4 04:55:37.279906 systemd[1]: Successfully made /usr/ read-only.
Nov 4 04:55:37.279918 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:55:37.279930 systemd[1]: Detected virtualization kvm.
Nov 4 04:55:37.279938 systemd[1]: Detected architecture x86-64.
Nov 4 04:55:37.279947 systemd[1]: Running in initrd.
Nov 4 04:55:37.279955 systemd[1]: No hostname configured, using default hostname.
Nov 4 04:55:37.279963 systemd[1]: Hostname set to .
Nov 4 04:55:37.279974 systemd[1]: Initializing machine ID from random generator.
Nov 4 04:55:37.279982 systemd[1]: Queued start job for default target initrd.target. Nov 4 04:55:37.279990 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 04:55:37.279999 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 04:55:37.280007 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 04:55:37.280064 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 04:55:37.280079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 04:55:37.280092 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 04:55:37.280101 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 04:55:37.280109 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 04:55:37.280118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 04:55:37.280126 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 04:55:37.280134 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:55:37.280144 systemd[1]: Reached target slices.target - Slice Units. Nov 4 04:55:37.280153 systemd[1]: Reached target swap.target - Swaps. Nov 4 04:55:37.280161 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:55:37.280169 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 04:55:37.280177 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 04:55:37.280186 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 04:55:37.280194 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 4 04:55:37.280205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 04:55:37.280213 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 04:55:37.280221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 04:55:37.280230 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:55:37.280239 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 04:55:37.280247 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 04:55:37.280257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 04:55:37.280265 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 04:55:37.280274 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 04:55:37.280282 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 04:55:37.280291 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 04:55:37.280299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 04:55:37.280307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:37.280318 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 04:55:37.280354 systemd-journald[303]: Collecting audit messages is disabled. Nov 4 04:55:37.280377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 04:55:37.281013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 04:55:37.281029 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 4 04:55:37.281038 kernel: Bridge firewalling registered Nov 4 04:55:37.281047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 04:55:37.281060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 04:55:37.281069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:55:37.281078 systemd-journald[303]: Journal started Nov 4 04:55:37.281096 systemd-journald[303]: Runtime Journal (/run/log/journal/b53f54eb7de84bd0af2b71723207b78f) is 8M, max 78.1M, 70.1M free. Nov 4 04:55:37.240855 systemd-modules-load[304]: Inserted module 'br_netfilter' Nov 4 04:55:37.368944 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 04:55:37.373809 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 04:55:37.374666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:37.375980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:55:37.381744 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 04:55:37.386016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 04:55:37.394079 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 04:55:37.397767 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 04:55:37.418312 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 04:55:37.424176 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 04:55:37.430228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 04:55:37.433893 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Nov 4 04:55:37.444967 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:55:37.459371 systemd-resolved[325]: Positive Trust Anchors: Nov 4 04:55:37.460345 systemd-resolved[325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 04:55:37.460351 systemd-resolved[325]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 04:55:37.460380 systemd-resolved[325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 04:55:37.478608 dracut-cmdline[342]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c479bf273e218e23ca82ede45f2bfcd1a1714a33fe5860e964ed0aea09538f01 Nov 4 04:55:37.488371 systemd-resolved[325]: Defaulting to hostname 'linux'. Nov 4 04:55:37.489583 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 04:55:37.491396 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 04:55:37.572807 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 04:55:37.587807 kernel: iscsi: registered transport (tcp) Nov 4 04:55:37.611891 kernel: iscsi: registered transport (qla4xxx) Nov 4 04:55:37.611921 kernel: QLogic iSCSI HBA Driver Nov 4 04:55:37.640525 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 04:55:37.658964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 04:55:37.662406 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 04:55:37.712107 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 04:55:37.715935 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 04:55:37.719974 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 04:55:37.748908 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 04:55:37.753930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 04:55:37.782237 systemd-udevd[579]: Using default interface naming scheme 'v257'. Nov 4 04:55:37.795648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 04:55:37.799941 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 04:55:37.824349 dracut-pre-trigger[649]: rd.md=0: removing MD RAID activation Nov 4 04:55:37.845718 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 04:55:37.851243 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:55:37.864910 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 04:55:37.868937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 04:55:37.904469 systemd-networkd[701]: lo: Link UP Nov 4 04:55:37.904483 systemd-networkd[701]: lo: Gained carrier Nov 4 04:55:37.905028 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:55:37.906581 systemd[1]: Reached target network.target - Network. Nov 4 04:55:37.975523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 04:55:37.980926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 04:55:38.095533 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 4 04:55:38.111972 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 4 04:55:38.124575 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 4 04:55:38.137811 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 04:55:38.140433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 4 04:55:38.156822 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 4 04:55:38.296361 kernel: AES CTR mode by8 optimization enabled Nov 4 04:55:38.316921 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 04:55:38.336014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 04:55:38.342076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:38.368998 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:38.374095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:38.378177 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:38.378183 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 4 04:55:38.389086 disk-uuid[818]: Primary Header is updated. Nov 4 04:55:38.389086 disk-uuid[818]: Secondary Entries is updated. Nov 4 04:55:38.389086 disk-uuid[818]: Secondary Header is updated. Nov 4 04:55:38.381711 systemd-networkd[701]: eth0: Link UP Nov 4 04:55:38.382362 systemd-networkd[701]: eth0: Gained carrier Nov 4 04:55:38.382374 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:38.557342 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 04:55:38.571306 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 04:55:38.573380 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 04:55:38.574412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 04:55:38.576583 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 04:55:38.580453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 04:55:38.602205 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 04:55:39.185942 systemd-networkd[701]: eth0: DHCPv4 address 172.232.15.13/24, gateway 172.232.15.1 acquired from 23.213.15.219 Nov 4 04:55:39.454658 disk-uuid[819]: Warning: The kernel is still using the old partition table. Nov 4 04:55:39.454658 disk-uuid[819]: The new table will be used at the next reboot or after you Nov 4 04:55:39.454658 disk-uuid[819]: run partprobe(8) or kpartx(8) Nov 4 04:55:39.454658 disk-uuid[819]: The operation has completed successfully. Nov 4 04:55:39.466758 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 04:55:39.466952 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 04:55:39.469717 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 4 04:55:39.523828 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (850) Nov 4 04:55:39.528999 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:39.529023 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:39.539392 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:55:39.539422 kernel: BTRFS info (device sda6): turning on async discard Nov 4 04:55:39.539435 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:55:39.551805 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:39.552954 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 04:55:39.555024 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 04:55:39.578843 systemd-networkd[701]: eth0: Gained IPv6LL Nov 4 04:55:39.684270 ignition[869]: Ignition 2.22.0 Nov 4 04:55:39.684292 ignition[869]: Stage: fetch-offline Nov 4 04:55:39.684338 ignition[869]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:39.687168 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 04:55:39.684352 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:39.684442 ignition[869]: parsed url from cmdline: "" Nov 4 04:55:39.690951 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 4 04:55:39.684446 ignition[869]: no config URL provided Nov 4 04:55:39.684452 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:55:39.684463 ignition[869]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:55:39.684473 ignition[869]: failed to fetch config: resource requires networking Nov 4 04:55:39.684725 ignition[869]: Ignition finished successfully Nov 4 04:55:39.721406 ignition[875]: Ignition 2.22.0 Nov 4 04:55:39.721423 ignition[875]: Stage: fetch Nov 4 04:55:39.721719 ignition[875]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:39.721729 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:39.721816 ignition[875]: parsed url from cmdline: "" Nov 4 04:55:39.721821 ignition[875]: no config URL provided Nov 4 04:55:39.721827 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 04:55:39.721836 ignition[875]: no config at "/usr/lib/ignition/user.ign" Nov 4 04:55:39.721869 ignition[875]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 4 04:55:39.816381 ignition[875]: PUT result: OK Nov 4 04:55:39.816504 ignition[875]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 4 04:55:39.928047 ignition[875]: GET result: OK Nov 4 04:55:39.928968 ignition[875]: parsing config with SHA512: 819f6fbb1674a8e7018037913dcbbef47d851303fa36d92f91f080ff0ab01f80f732222979db082cad25fe8c0a93d9f906d67b836df4278c0b5ef295ee656402 Nov 4 04:55:39.932442 unknown[875]: fetched base config from "system" Nov 4 04:55:39.933531 unknown[875]: fetched base config from "system" Nov 4 04:55:39.934395 unknown[875]: fetched user config from "akamai" Nov 4 04:55:39.935062 ignition[875]: fetch: fetch complete Nov 4 04:55:39.935071 ignition[875]: fetch: fetch passed Nov 4 04:55:39.935124 ignition[875]: Ignition finished successfully Nov 4 04:55:39.938272 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Nov 4 04:55:39.940399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 04:55:39.974487 ignition[882]: Ignition 2.22.0 Nov 4 04:55:39.974507 ignition[882]: Stage: kargs Nov 4 04:55:39.974639 ignition[882]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:39.974649 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:39.978322 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 4 04:55:39.975580 ignition[882]: kargs: kargs passed Nov 4 04:55:39.975627 ignition[882]: Ignition finished successfully Nov 4 04:55:39.982024 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 04:55:40.012406 ignition[889]: Ignition 2.22.0 Nov 4 04:55:40.012427 ignition[889]: Stage: disks Nov 4 04:55:40.015503 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 04:55:40.012544 ignition[889]: no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:40.016590 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 04:55:40.012554 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:40.018185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 04:55:40.013141 ignition[889]: disks: disks passed Nov 4 04:55:40.020033 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 04:55:40.013181 ignition[889]: Ignition finished successfully Nov 4 04:55:40.043498 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:55:40.045472 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:55:40.048388 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 04:55:40.081564 systemd-fsck[897]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 4 04:55:40.083882 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Nov 4 04:55:40.087723 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 04:55:40.210797 kernel: EXT4-fs (sda9): mounted filesystem c35327fb-3cdd-496e-85aa-9e1b4133507f r/w with ordered data mode. Quota mode: none. Nov 4 04:55:40.211687 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 04:55:40.212886 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 04:55:40.215610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 04:55:40.218863 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 04:55:40.221337 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 04:55:40.222786 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 04:55:40.222819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 04:55:40.230726 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 04:55:40.233906 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 04:55:40.242927 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (905) Nov 4 04:55:40.242952 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:40.246794 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:40.259799 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:55:40.259824 kernel: BTRFS info (device sda6): turning on async discard Nov 4 04:55:40.259836 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:55:40.262471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 04:55:40.306140 initrd-setup-root[929]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 04:55:40.310628 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Nov 4 04:55:40.316308 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 04:55:40.320500 initrd-setup-root[950]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 04:55:40.421830 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 04:55:40.424315 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 04:55:40.426924 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 04:55:40.445819 kernel: BTRFS info (device sda6): last unmount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:40.461158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 04:55:40.476516 ignition[1021]: INFO : Ignition 2.22.0 Nov 4 04:55:40.476516 ignition[1021]: INFO : Stage: mount Nov 4 04:55:40.478339 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:40.478339 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:40.478339 ignition[1021]: INFO : mount: mount passed Nov 4 04:55:40.478339 ignition[1021]: INFO : Ignition finished successfully Nov 4 04:55:40.479595 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 04:55:40.483867 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 04:55:40.508123 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 04:55:40.509625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 4 04:55:40.531794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1031) Nov 4 04:55:40.536321 kernel: BTRFS info (device sda6): first mount of filesystem c6585032-901f-4e89-912e-5749e07725ea Nov 4 04:55:40.536352 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 04:55:40.544937 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 04:55:40.544961 kernel: BTRFS info (device sda6): turning on async discard Nov 4 04:55:40.547293 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 04:55:40.551047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 4 04:55:40.584378 ignition[1048]: INFO : Ignition 2.22.0 Nov 4 04:55:40.584378 ignition[1048]: INFO : Stage: files Nov 4 04:55:40.586507 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 04:55:40.586507 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 4 04:55:40.586507 ignition[1048]: DEBUG : files: compiled without relabeling support, skipping Nov 4 04:55:40.586507 ignition[1048]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 04:55:40.586507 ignition[1048]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 04:55:40.593343 ignition[1048]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 04:55:40.593343 ignition[1048]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 04:55:40.593343 ignition[1048]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 04:55:40.593343 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 04:55:40.593343 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 4 04:55:40.590403 
unknown[1048]: wrote ssh authorized keys file for user: core Nov 4 04:55:40.892621 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 04:55:41.109050 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 4 04:55:41.110854 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 04:55:41.110854 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 4 04:55:41.397953 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 04:55:41.803188 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 04:55:41.803188 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 04:55:41.807203 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 4 04:55:42.044036 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 04:55:42.384064 ignition[1048]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 4 04:55:42.384064 ignition[1048]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 4 04:55:42.409223 ignition[1048]: INFO : files: files passed Nov 4 04:55:42.409223 ignition[1048]: INFO : Ignition finished successfully Nov 4 04:55:42.393153 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 04:55:42.412910 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 04:55:42.417927 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 04:55:42.429638 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 04:55:42.429743 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 4 04:55:42.442266 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:55:42.443713 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:55:42.445245 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 04:55:42.445011 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:55:42.447204 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 04:55:42.449373 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 04:55:42.492960 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 04:55:42.493118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 04:55:42.495422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 04:55:42.497048 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 04:55:42.500199 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 04:55:42.501316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 04:55:42.527358 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:55:42.531606 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 04:55:42.551695 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 04:55:42.551958 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:55:42.555072 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:55:42.556274 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 04:55:42.558254 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 04:55:42.558357 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 04:55:42.560826 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 04:55:42.562057 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 04:55:42.564136 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 04:55:42.566164 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 04:55:42.568409 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 04:55:42.570463 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 04:55:42.572544 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 04:55:42.574762 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 04:55:42.577047 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 04:55:42.579025 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 04:55:42.581042 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 04:55:42.583246 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 04:55:42.583388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 04:55:42.585893 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:55:42.587308 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:55:42.589218 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 04:55:42.590225 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:55:42.591198 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 04:55:42.591309 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 04:55:42.594078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 04:55:42.594229 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 04:55:42.595485 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 04:55:42.595620 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 04:55:42.598857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 04:55:42.604155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 04:55:42.606863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 04:55:42.607039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 04:55:42.610559 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 04:55:42.610666 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:55:42.615004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 04:55:42.615105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 04:55:42.624455 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 04:55:42.625649 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 04:55:42.637837 ignition[1105]: INFO : Ignition 2.22.0
Nov 4 04:55:42.637837 ignition[1105]: INFO : Stage: umount
Nov 4 04:55:42.637837 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 04:55:42.637837 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 4 04:55:42.646254 ignition[1105]: INFO : umount: umount passed
Nov 4 04:55:42.646254 ignition[1105]: INFO : Ignition finished successfully
Nov 4 04:55:42.640813 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 04:55:42.643852 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 04:55:42.643973 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 04:55:42.647299 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 04:55:42.647350 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 04:55:42.649214 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 04:55:42.649265 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 04:55:42.650972 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 04:55:42.651028 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 04:55:42.674124 systemd[1]: Stopped target network.target - Network.
Nov 4 04:55:42.675844 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 04:55:42.675897 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 04:55:42.677755 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 04:55:42.679489 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 04:55:42.683821 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:55:42.685277 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 04:55:42.687189 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 04:55:42.689185 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 04:55:42.689230 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 04:55:42.691152 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 04:55:42.691193 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 04:55:42.693046 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 04:55:42.693100 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 04:55:42.695316 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 04:55:42.695372 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 04:55:42.697594 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 04:55:42.699562 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 04:55:42.704197 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 04:55:42.704309 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 04:55:42.706249 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 04:55:42.706359 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 04:55:42.710696 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 04:55:42.710918 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 04:55:42.714656 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 04:55:42.714850 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 04:55:42.720854 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 04:55:42.722650 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 04:55:42.722698 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:55:42.725371 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 04:55:42.727643 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 04:55:42.727711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 04:55:42.728673 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 04:55:42.728726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:55:42.730896 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 04:55:42.730951 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:55:42.735099 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:55:42.756984 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 04:55:42.758991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 04:55:42.761570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 04:55:42.761663 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:55:42.762700 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 04:55:42.762744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:55:42.763663 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 04:55:42.763716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 04:55:42.766493 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 04:55:42.766549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 04:55:42.768449 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 04:55:42.768507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 04:55:42.771273 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 04:55:42.775203 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 04:55:42.775260 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:55:42.777016 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 04:55:42.777074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:55:42.778364 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 4 04:55:42.778414 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:55:42.780243 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 04:55:42.780294 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:55:42.784377 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 04:55:42.784434 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 04:55:42.787234 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 04:55:42.788630 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 04:55:42.790868 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 04:55:42.790997 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 04:55:42.794068 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 04:55:42.796518 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 04:55:42.818442 systemd[1]: Switching root.
Nov 4 04:55:42.854358 systemd-journald[303]: Journal stopped
Nov 4 04:55:44.171210 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 4 04:55:44.171241 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 04:55:44.171255 kernel: SELinux: policy capability open_perms=1
Nov 4 04:55:44.171265 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 04:55:44.171274 kernel: SELinux: policy capability always_check_network=0
Nov 4 04:55:44.171286 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 04:55:44.171296 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 04:55:44.171306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 04:55:44.171316 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 04:55:44.171325 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 04:55:44.171335 kernel: audit: type=1403 audit(1762232143.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 04:55:44.171348 systemd[1]: Successfully loaded SELinux policy in 85.216ms.
Nov 4 04:55:44.171359 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.758ms.
Nov 4 04:55:44.171371 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 04:55:44.171384 systemd[1]: Detected virtualization kvm.
Nov 4 04:55:44.171395 systemd[1]: Detected architecture x86-64.
Nov 4 04:55:44.171405 systemd[1]: Detected first boot.
Nov 4 04:55:44.171416 systemd[1]: Initializing machine ID from random generator.
Nov 4 04:55:44.171426 zram_generator::config[1148]: No configuration found.
Nov 4 04:55:44.171437 kernel: Guest personality initialized and is inactive
Nov 4 04:55:44.171449 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 04:55:44.171459 kernel: Initialized host personality
Nov 4 04:55:44.171469 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 04:55:44.171480 systemd[1]: Populated /etc with preset unit settings.
Nov 4 04:55:44.171491 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 04:55:44.171501 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 04:55:44.171514 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:55:44.171525 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 04:55:44.171536 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 04:55:44.171547 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 04:55:44.171558 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 04:55:44.171568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 04:55:44.171581 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 04:55:44.171592 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 04:55:44.171603 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 04:55:44.171613 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 04:55:44.171624 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 04:55:44.171635 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 04:55:44.171648 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 04:55:44.171659 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 04:55:44.171672 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 04:55:44.171683 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 04:55:44.171694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 04:55:44.171705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 04:55:44.171718 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 04:55:44.171729 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 04:55:44.171740 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 04:55:44.171751 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 04:55:44.171762 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 04:55:44.171805 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 04:55:44.171821 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 04:55:44.171837 systemd[1]: Reached target swap.target - Swaps.
Nov 4 04:55:44.171848 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 04:55:44.171859 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 04:55:44.171870 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 04:55:44.171881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 04:55:44.171895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 04:55:44.171906 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 04:55:44.171917 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 04:55:44.171928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 4 04:55:44.171938 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 4 04:55:44.171951 systemd[1]: Mounting media.mount - External Media Directory...
Nov 4 04:55:44.171963 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:55:44.171974 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 4 04:55:44.171984 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 4 04:55:44.171995 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 4 04:55:44.172007 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 4 04:55:44.172018 systemd[1]: Reached target machines.target - Containers.
Nov 4 04:55:44.172032 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 4 04:55:44.172043 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 4 04:55:44.172054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 4 04:55:44.172065 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 4 04:55:44.172076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 4 04:55:44.172087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 4 04:55:44.172098 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 4 04:55:44.172111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 4 04:55:44.172122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 4 04:55:44.172134 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 4 04:55:44.172144 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 4 04:55:44.172155 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 4 04:55:44.172166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 4 04:55:44.172179 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 4 04:55:44.172190 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 4 04:55:44.172201 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 4 04:55:44.172212 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 4 04:55:44.172222 kernel: fuse: init (API version 7.41)
Nov 4 04:55:44.172233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 4 04:55:44.172244 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 4 04:55:44.172257 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 4 04:55:44.172268 kernel: ACPI: bus type drm_connector registered
Nov 4 04:55:44.172278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 4 04:55:44.172289 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 4 04:55:44.172300 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 4 04:55:44.172311 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 4 04:55:44.172322 systemd[1]: Mounted media.mount - External Media Directory.
Nov 4 04:55:44.172335 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 4 04:55:44.172345 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 4 04:55:44.172356 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 4 04:55:44.172367 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 4 04:55:44.172378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 04:55:44.172389 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 4 04:55:44.172399 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 4 04:55:44.172435 systemd-journald[1236]: Collecting audit messages is disabled.
Nov 4 04:55:44.172456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 4 04:55:44.172468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 4 04:55:44.172481 systemd-journald[1236]: Journal started
Nov 4 04:55:44.172500 systemd-journald[1236]: Runtime Journal (/run/log/journal/fee392d585c242b58dd226e1ca4bb398) is 8M, max 78.1M, 70.1M free.
Nov 4 04:55:43.714083 systemd[1]: Queued start job for default target multi-user.target.
Nov 4 04:55:44.176869 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 4 04:55:43.740604 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 4 04:55:43.741186 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 4 04:55:44.180118 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 4 04:55:44.180341 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 4 04:55:44.181540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 4 04:55:44.181850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 4 04:55:44.183313 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 4 04:55:44.183559 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 4 04:55:44.184815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 4 04:55:44.185084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 4 04:55:44.186449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 4 04:55:44.188138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 04:55:44.190379 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 4 04:55:44.191748 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 4 04:55:44.204439 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 4 04:55:44.206557 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 4 04:55:44.210900 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 4 04:55:44.213061 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 4 04:55:44.215972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 4 04:55:44.215998 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 04:55:44.218318 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 4 04:55:44.219462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 4 04:55:44.224921 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 4 04:55:44.227997 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 4 04:55:44.228956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 4 04:55:44.231942 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 4 04:55:44.234022 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 4 04:55:44.237498 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 4 04:55:44.246645 systemd-journald[1236]: Time spent on flushing to /var/log/journal/fee392d585c242b58dd226e1ca4bb398 is 30.706ms for 985 entries.
Nov 4 04:55:44.246645 systemd-journald[1236]: System Journal (/var/log/journal/fee392d585c242b58dd226e1ca4bb398) is 8M, max 588.1M, 580.1M free.
Nov 4 04:55:44.302065 systemd-journald[1236]: Received client request to flush runtime journal.
Nov 4 04:55:44.302106 kernel: loop1: detected capacity change from 0 to 119080
Nov 4 04:55:44.243984 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 4 04:55:44.249218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 4 04:55:44.256142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 04:55:44.258219 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 4 04:55:44.259941 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 4 04:55:44.269846 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 4 04:55:44.272340 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 4 04:55:44.276170 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 4 04:55:44.303554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 4 04:55:44.306004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 4 04:55:44.322642 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 4 04:55:44.324365 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 4 04:55:44.324391 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 4 04:55:44.331581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 4 04:55:44.336920 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 4 04:55:44.344799 kernel: loop2: detected capacity change from 0 to 8
Nov 4 04:55:44.362745 kernel: loop3: detected capacity change from 0 to 111544
Nov 4 04:55:44.382865 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 4 04:55:44.387333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 4 04:55:44.393088 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 4 04:55:44.399842 kernel: loop4: detected capacity change from 0 to 229808
Nov 4 04:55:44.412077 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 4 04:55:44.423415 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Nov 4 04:55:44.423432 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Nov 4 04:55:44.434699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 04:55:44.446798 kernel: loop5: detected capacity change from 0 to 119080
Nov 4 04:55:44.459803 kernel: loop6: detected capacity change from 0 to 8
Nov 4 04:55:44.463186 kernel: loop7: detected capacity change from 0 to 111544
Nov 4 04:55:44.481934 kernel: loop1: detected capacity change from 0 to 229808
Nov 4 04:55:44.494168 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 4 04:55:44.502176 (sd-merge)[1300]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 4 04:55:44.508745 (sd-merge)[1300]: Merged extensions into '/usr'.
Nov 4 04:55:44.516348 systemd[1]: Reload requested from client PID 1273 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 4 04:55:44.516440 systemd[1]: Reloading...
Nov 4 04:55:44.589113 systemd-resolved[1294]: Positive Trust Anchors:
Nov 4 04:55:44.589137 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 4 04:55:44.589144 systemd-resolved[1294]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 4 04:55:44.589171 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 4 04:55:44.596571 systemd-resolved[1294]: Defaulting to hostname 'linux'.
Nov 4 04:55:44.633803 zram_generator::config[1330]: No configuration found.
Nov 4 04:55:44.837127 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 4 04:55:44.837377 systemd[1]: Reloading finished in 320 ms.
Nov 4 04:55:44.866506 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 4 04:55:44.868023 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 4 04:55:44.869455 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 4 04:55:44.875495 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 4 04:55:44.884497 systemd[1]: Starting ensure-sysext.service...
Nov 4 04:55:44.888913 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 4 04:55:44.891754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 04:55:44.914852 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 4 04:55:44.915053 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 4 04:55:44.915404 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 04:55:44.915665 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 04:55:44.916596 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 04:55:44.918466 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Nov 4 04:55:44.918566 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Nov 4 04:55:44.919077 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Nov 4 04:55:44.919157 systemd[1]: Reloading... Nov 4 04:55:44.928536 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:55:44.928558 systemd-tmpfiles[1378]: Skipping /boot Nov 4 04:55:44.943225 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 04:55:44.943246 systemd-tmpfiles[1378]: Skipping /boot Nov 4 04:55:44.966766 systemd-udevd[1379]: Using default interface naming scheme 'v257'. Nov 4 04:55:45.009966 zram_generator::config[1409]: No configuration found. Nov 4 04:55:45.145977 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 04:55:45.186802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 4 04:55:45.213823 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 04:55:45.217797 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 04:55:45.246798 kernel: ACPI: button: Power Button [PWRF] Nov 4 04:55:45.273727 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 04:55:45.274190 systemd[1]: Reloading finished in 354 ms. Nov 4 04:55:45.283197 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 4 04:55:45.286873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 04:55:45.357064 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 04:55:45.360485 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 04:55:45.367159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 04:55:45.370428 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 04:55:45.376710 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 04:55:45.380171 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 04:55:45.384506 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:55:45.384669 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:55:45.388047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 04:55:45.394574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 04:55:45.403983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 04:55:45.405016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:55:45.405106 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:55:45.405184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 4 04:55:45.414857 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:55:45.415033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:55:45.415188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:55:45.415261 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:55:45.415373 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:55:45.421736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:55:45.423020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 04:55:45.428739 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 04:55:45.430989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 04:55:45.431092 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 04:55:45.431914 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 04:55:45.450260 systemd[1]: Finished ensure-sysext.service. 
Nov 4 04:55:45.459671 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 04:55:45.475974 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 04:55:45.488387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 04:55:45.509417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 4 04:55:45.515477 kernel: EDAC MC: Ver: 3.0.0 Nov 4 04:55:45.526143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 04:55:45.530414 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 04:55:45.530622 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 04:55:45.558166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 04:55:45.562508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 04:55:45.565878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 04:55:45.566135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 04:55:45.567501 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 04:55:45.567735 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 04:55:45.593326 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 04:55:45.593467 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 04:55:45.598045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 04:55:45.614330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 4 04:55:45.617285 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 04:55:45.634399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 04:55:45.639917 augenrules[1543]: No rules Nov 4 04:55:45.641333 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 04:55:45.642391 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 04:55:45.787311 systemd-networkd[1499]: lo: Link UP Nov 4 04:55:45.787322 systemd-networkd[1499]: lo: Gained carrier Nov 4 04:55:45.793280 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 04:55:45.793577 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:45.793582 systemd-networkd[1499]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 04:55:45.794447 systemd-networkd[1499]: eth0: Link UP Nov 4 04:55:45.794492 systemd[1]: Reached target network.target - Network. Nov 4 04:55:45.795496 systemd-networkd[1499]: eth0: Gained carrier Nov 4 04:55:45.795510 systemd-networkd[1499]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 04:55:45.808485 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 04:55:45.816155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 04:55:45.927198 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 04:55:45.931548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 4 04:55:45.937710 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 04:55:45.954740 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 04:55:46.085449 ldconfig[1497]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 04:55:46.089143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 04:55:46.091768 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 04:55:46.114440 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 04:55:46.115996 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 04:55:46.117003 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 04:55:46.118141 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 04:55:46.119105 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 04:55:46.120165 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 04:55:46.121167 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 04:55:46.122118 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 04:55:46.123066 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 04:55:46.123110 systemd[1]: Reached target paths.target - Path Units. Nov 4 04:55:46.123932 systemd[1]: Reached target timers.target - Timer Units. Nov 4 04:55:46.125767 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 04:55:46.128272 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 4 04:55:46.130976 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 04:55:46.132085 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 04:55:46.133029 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 04:55:46.136395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 04:55:46.137747 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 04:55:46.139346 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 04:55:46.140940 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 04:55:46.141739 systemd[1]: Reached target basic.target - Basic System. Nov 4 04:55:46.142594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:55:46.142637 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 04:55:46.143863 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 04:55:46.146331 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 04:55:46.158097 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 04:55:46.163029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 04:55:46.165957 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 04:55:46.181970 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 04:55:46.183911 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 04:55:46.188250 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... 
Nov 4 04:55:46.194013 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 04:55:46.195906 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 04:55:46.202360 jq[1569]: false Nov 4 04:55:46.204997 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 04:55:46.209598 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 04:55:46.226374 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing passwd entry cache Nov 4 04:55:46.226385 oslogin_cache_refresh[1571]: Refreshing passwd entry cache Nov 4 04:55:46.228009 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 04:55:46.229859 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 04:55:46.230304 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 04:55:46.232187 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 04:55:46.236170 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 04:55:46.248228 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting users, quitting Nov 4 04:55:46.248220 oslogin_cache_refresh[1571]: Failure getting users, quitting Nov 4 04:55:46.248318 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 04:55:46.248318 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Refreshing group entry cache Nov 4 04:55:46.248241 oslogin_cache_refresh[1571]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 4 04:55:46.248290 oslogin_cache_refresh[1571]: Refreshing group entry cache Nov 4 04:55:46.248754 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Failure getting groups, quitting Nov 4 04:55:46.248754 google_oslogin_nss_cache[1571]: oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:55:46.248745 oslogin_cache_refresh[1571]: Failure getting groups, quitting Nov 4 04:55:46.248759 oslogin_cache_refresh[1571]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 04:55:46.251813 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 04:55:46.254166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 04:55:46.254449 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 04:55:46.271369 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 04:55:46.271704 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 04:55:46.272588 jq[1586]: true Nov 4 04:55:46.273415 coreos-metadata[1566]: Nov 04 04:55:46.259 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 4 04:55:46.281480 extend-filesystems[1570]: Found /dev/sda6 Nov 4 04:55:46.289218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 04:55:46.289916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 04:55:46.296543 extend-filesystems[1570]: Found /dev/sda9 Nov 4 04:55:46.297422 extend-filesystems[1570]: Checking size of /dev/sda9 Nov 4 04:55:46.337305 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks Nov 4 04:55:46.313922 systemd[1]: motdgen.service: Deactivated successfully. 
Nov 4 04:55:46.337532 update_engine[1582]: I20251104 04:55:46.318732 1582 main.cc:92] Flatcar Update Engine starting Nov 4 04:55:46.337726 extend-filesystems[1570]: Resized partition /dev/sda9 Nov 4 04:55:46.339467 jq[1607]: true Nov 4 04:55:46.314221 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 04:55:46.339817 extend-filesystems[1619]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 04:55:46.350750 tar[1589]: linux-amd64/LICENSE Nov 4 04:55:46.350750 tar[1589]: linux-amd64/helm Nov 4 04:55:46.368960 dbus-daemon[1567]: [system] SELinux support is enabled Nov 4 04:55:46.369184 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 04:55:46.374091 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 04:55:46.379422 update_engine[1582]: I20251104 04:55:46.379209 1582 update_check_scheduler.cc:74] Next update check in 4m40s Nov 4 04:55:46.374121 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 04:55:46.377816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 04:55:46.377839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 04:55:46.380954 systemd[1]: Started update-engine.service - Update Engine. Nov 4 04:55:46.393743 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Nov 4 04:55:46.519095 dbus-daemon[1567]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1499 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 4 04:55:46.519369 systemd-networkd[1499]: eth0: DHCPv4 address 172.232.15.13/24, gateway 172.232.15.1 acquired from 23.213.15.219 Nov 4 04:55:46.522521 systemd-timesyncd[1514]: Network configuration changed, trying to establish connection. Nov 4 04:55:46.529371 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Nov 4 04:55:46.533022 systemd-logind[1579]: Watching system buttons on /dev/input/event2 (Power Button) Nov 4 04:55:46.533067 systemd-logind[1579]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 4 04:55:46.536885 systemd-logind[1579]: New seat seat0. Nov 4 04:55:46.538961 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 4 04:55:46.541856 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 04:55:46.546847 systemd[1]: Starting sshkeys.service... Nov 4 04:55:46.548340 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 04:55:46.586324 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 4 04:55:46.589235 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 4 04:55:46.625166 systemd-timesyncd[1514]: Contacted time server 172.232.15.202:123 (0.flatcar.pool.ntp.org). Nov 4 04:55:46.625433 systemd-timesyncd[1514]: Initial clock synchronization to Tue 2025-11-04 04:55:46.696003 UTC. 
Nov 4 04:55:46.690810 kernel: EXT4-fs (sda9): resized filesystem to 19377147 Nov 4 04:55:46.708788 containerd[1610]: time="2025-11-04T04:55:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 04:55:46.709837 containerd[1610]: time="2025-11-04T04:55:46.709563230Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 4 04:55:46.713594 extend-filesystems[1619]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 4 04:55:46.713594 extend-filesystems[1619]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 4 04:55:46.713594 extend-filesystems[1619]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long. Nov 4 04:55:46.713307 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 04:55:46.724471 extend-filesystems[1570]: Resized filesystem in /dev/sda9 Nov 4 04:55:46.714837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741318850Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.88µs" Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741350990Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741388700Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741401050Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741549870Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741564180Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741625110Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741635930Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741863220Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741878500Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741894470Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742155 containerd[1610]: time="2025-11-04T04:55:46.741902240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742378 containerd[1610]: time="2025-11-04T04:55:46.742072540Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742378 containerd[1610]: time="2025-11-04T04:55:46.742085230Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742378 containerd[1610]: time="2025-11-04T04:55:46.742173140Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742378 containerd[1610]: time="2025-11-04T04:55:46.742375090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742454 containerd[1610]: time="2025-11-04T04:55:46.742403990Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 04:55:46.742454 containerd[1610]: time="2025-11-04T04:55:46.742413440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 04:55:46.745367 containerd[1610]: time="2025-11-04T04:55:46.743819330Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 04:55:46.751249 containerd[1610]: 
time="2025-11-04T04:55:46.750366280Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 04:55:46.751249 containerd[1610]: time="2025-11-04T04:55:46.750453800Z" level=info msg="metadata content store policy set" policy=shared Nov 4 04:55:46.755484 containerd[1610]: time="2025-11-04T04:55:46.755443360Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 04:55:46.755522 containerd[1610]: time="2025-11-04T04:55:46.755508520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:55:46.755695 containerd[1610]: time="2025-11-04T04:55:46.755662750Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 4 04:55:46.755695 containerd[1610]: time="2025-11-04T04:55:46.755689710Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 04:55:46.755755 containerd[1610]: time="2025-11-04T04:55:46.755707900Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 04:55:46.755755 containerd[1610]: time="2025-11-04T04:55:46.755720640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 04:55:46.755755 containerd[1610]: time="2025-11-04T04:55:46.755751050Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 04:55:46.755837 containerd[1610]: time="2025-11-04T04:55:46.755760330Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 04:55:46.755837 containerd[1610]: time="2025-11-04T04:55:46.755794670Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 
04:55:46.755837 containerd[1610]: time="2025-11-04T04:55:46.755812350Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 04:55:46.755837 containerd[1610]: time="2025-11-04T04:55:46.755821700Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 04:55:46.755837 containerd[1610]: time="2025-11-04T04:55:46.755837620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 04:55:46.755936 containerd[1610]: time="2025-11-04T04:55:46.755846290Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 04:55:46.755936 containerd[1610]: time="2025-11-04T04:55:46.755857290Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 04:55:46.755970 containerd[1610]: time="2025-11-04T04:55:46.755960090Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 04:55:46.755996 containerd[1610]: time="2025-11-04T04:55:46.755978400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 04:55:46.755996 containerd[1610]: time="2025-11-04T04:55:46.755991330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 04:55:46.756037 containerd[1610]: time="2025-11-04T04:55:46.756005870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 04:55:46.756037 containerd[1610]: time="2025-11-04T04:55:46.756015500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 04:55:46.756037 containerd[1610]: time="2025-11-04T04:55:46.756024530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 04:55:46.756091 containerd[1610]: 
time="2025-11-04T04:55:46.756040230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 04:55:46.756091 containerd[1610]: time="2025-11-04T04:55:46.756049920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 04:55:46.756091 containerd[1610]: time="2025-11-04T04:55:46.756058710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 04:55:46.756091 containerd[1610]: time="2025-11-04T04:55:46.756075750Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 04:55:46.756091 containerd[1610]: time="2025-11-04T04:55:46.756084600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 04:55:46.756173 containerd[1610]: time="2025-11-04T04:55:46.756103440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 04:55:46.756173 containerd[1610]: time="2025-11-04T04:55:46.756139750Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 04:55:46.756173 containerd[1610]: time="2025-11-04T04:55:46.756150090Z" level=info msg="Start snapshots syncer"
Nov 4 04:55:46.757894 containerd[1610]: time="2025-11-04T04:55:46.757862720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 04:55:46.758105 containerd[1610]: time="2025-11-04T04:55:46.758063210Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 04:55:46.758225 containerd[1610]: time="2025-11-04T04:55:46.758126280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.759850700Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760014680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760035930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760046410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760056090Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760065620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760074960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760083920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760092710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 04:55:46.760142 containerd[1610]: time="2025-11-04T04:55:46.760116390Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761123850Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761150630Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761221860Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761240300Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761248810Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761264930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761274030Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761289100Z" level=info msg="runtime interface created"
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761294380Z" level=info msg="created NRI interface"
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761301680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761311530Z" level=info msg="Connect containerd service"
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.761328040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 04:55:46.763201 containerd[1610]: time="2025-11-04T04:55:46.762375930Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 04:55:46.765157 coreos-metadata[1651]: Nov 04 04:55:46.765 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 4 04:55:46.825886 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 4 04:55:46.827593 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 4 04:55:46.833338 dbus-daemon[1567]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1648 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 4 04:55:46.842072 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 4 04:55:46.882319 coreos-metadata[1651]: Nov 04 04:55:46.882 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Nov 4 04:55:46.911368 locksmithd[1625]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 04:55:46.935181 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 04:55:46.958488 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 04:55:46.965246 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972620000Z" level=info msg="Start subscribing containerd event"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972701360Z" level=info msg="Start recovering state"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972885160Z" level=info msg="Start event monitor"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972933720Z" level=info msg="Start cni network conf syncer for default"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972942070Z" level=info msg="Start streaming server"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972954160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972961400Z" level=info msg="runtime interface starting up..."
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.972967650Z" level=info msg="starting plugins..."
Nov 4 04:55:46.973213 containerd[1610]: time="2025-11-04T04:55:46.973014850Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 04:55:46.973714 containerd[1610]: time="2025-11-04T04:55:46.973684370Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 04:55:46.975952 containerd[1610]: time="2025-11-04T04:55:46.975921210Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 04:55:46.977169 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 04:55:46.980401 containerd[1610]: time="2025-11-04T04:55:46.978838840Z" level=info msg="containerd successfully booted in 0.276531s"
Nov 4 04:55:46.987419 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 04:55:46.987679 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 04:55:46.993245 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 04:55:47.007602 polkitd[1667]: Started polkitd version 126
Nov 4 04:55:47.015716 polkitd[1667]: Loading rules from directory /etc/polkit-1/rules.d
Nov 4 04:55:47.016166 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 04:55:47.017586 polkitd[1667]: Loading rules from directory /run/polkit-1/rules.d
Nov 4 04:55:47.018371 polkitd[1667]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 4 04:55:47.018808 polkitd[1667]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Nov 4 04:55:47.019522 polkitd[1667]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Nov 4 04:55:47.019651 polkitd[1667]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 4 04:55:47.020950 coreos-metadata[1651]: Nov 04 04:55:47.020 INFO Fetch successful
Nov 4 04:55:47.021111 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 04:55:47.022877 polkitd[1667]: Finished loading, compiling and executing 2 rules
Nov 4 04:55:47.025090 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 04:55:47.026214 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 04:55:47.027511 systemd[1]: Started polkit.service - Authorization Manager.
Nov 4 04:55:47.031056 dbus-daemon[1567]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 4 04:55:47.036853 polkitd[1667]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 4 04:55:47.051509 systemd-resolved[1294]: System hostname changed to '172-232-15-13'.
Nov 4 04:55:47.051574 systemd-hostnamed[1648]: Hostname set to <172-232-15-13> (transient)
Nov 4 04:55:47.055918 update-ssh-keys[1702]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 04:55:47.056731 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 04:55:47.061464 systemd[1]: Finished sshkeys.service.
Nov 4 04:55:47.143769 tar[1589]: linux-amd64/README.md
Nov 4 04:55:47.157426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 04:55:47.258303 systemd-networkd[1499]: eth0: Gained IPv6LL
Nov 4 04:55:47.261329 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 04:55:47.264597 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 04:55:47.269892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:55:47.270991 coreos-metadata[1566]: Nov 04 04:55:47.269 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Nov 4 04:55:47.273056 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 04:55:47.305742 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 04:55:47.362089 coreos-metadata[1566]: Nov 04 04:55:47.362 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Nov 4 04:55:47.549309 coreos-metadata[1566]: Nov 04 04:55:47.549 INFO Fetch successful
Nov 4 04:55:47.549560 coreos-metadata[1566]: Nov 04 04:55:47.549 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Nov 4 04:55:47.812468 coreos-metadata[1566]: Nov 04 04:55:47.812 INFO Fetch successful
Nov 4 04:55:47.934714 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 04:55:47.936159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 04:55:48.238425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:55:48.239740 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 04:55:48.241909 systemd[1]: Startup finished in 3.043s (kernel) + 6.209s (initrd) + 5.313s (userspace) = 14.566s.
Nov 4 04:55:48.245186 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:55:48.826867 kubelet[1748]: E1104 04:55:48.826764 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:55:48.830820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:55:48.831219 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:55:48.831632 systemd[1]: kubelet.service: Consumed 931ms CPU time, 269.3M memory peak.
Nov 4 04:55:49.538290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 04:55:49.539474 systemd[1]: Started sshd@0-172.232.15.13:22-139.178.89.65:35562.service - OpenSSH per-connection server daemon (139.178.89.65:35562).
Nov 4 04:55:49.867218 sshd[1760]: Accepted publickey for core from 139.178.89.65 port 35562 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:49.870386 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:49.879492 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 04:55:49.881338 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 04:55:49.890799 systemd-logind[1579]: New session 1 of user core.
Nov 4 04:55:49.904452 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 04:55:49.908123 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 04:55:49.924358 (systemd)[1765]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 04:55:49.927124 systemd-logind[1579]: New session c1 of user core.
Nov 4 04:55:50.061655 systemd[1765]: Queued start job for default target default.target.
Nov 4 04:55:50.072272 systemd[1765]: Created slice app.slice - User Application Slice.
Nov 4 04:55:50.072309 systemd[1765]: Reached target paths.target - Paths.
Nov 4 04:55:50.072359 systemd[1765]: Reached target timers.target - Timers.
Nov 4 04:55:50.074034 systemd[1765]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 04:55:50.086537 systemd[1765]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 04:55:50.086661 systemd[1765]: Reached target sockets.target - Sockets.
Nov 4 04:55:50.086706 systemd[1765]: Reached target basic.target - Basic System.
Nov 4 04:55:50.086754 systemd[1765]: Reached target default.target - Main User Target.
Nov 4 04:55:50.086816 systemd[1765]: Startup finished in 153ms.
Nov 4 04:55:50.087133 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 04:55:50.095946 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 04:55:50.284935 systemd[1]: Started sshd@1-172.232.15.13:22-139.178.89.65:35570.service - OpenSSH per-connection server daemon (139.178.89.65:35570).
Nov 4 04:55:50.601520 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 35570 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:50.603569 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:50.609841 systemd-logind[1579]: New session 2 of user core.
Nov 4 04:55:50.617913 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 04:55:50.766657 sshd[1779]: Connection closed by 139.178.89.65 port 35570
Nov 4 04:55:50.767409 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Nov 4 04:55:50.773285 systemd[1]: sshd@1-172.232.15.13:22-139.178.89.65:35570.service: Deactivated successfully.
Nov 4 04:55:50.775356 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 04:55:50.776505 systemd-logind[1579]: Session 2 logged out. Waiting for processes to exit.
Nov 4 04:55:50.778551 systemd-logind[1579]: Removed session 2.
Nov 4 04:55:50.845407 systemd[1]: Started sshd@2-172.232.15.13:22-139.178.89.65:35584.service - OpenSSH per-connection server daemon (139.178.89.65:35584).
Nov 4 04:55:51.154239 sshd[1785]: Accepted publickey for core from 139.178.89.65 port 35584 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:51.156131 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:51.161476 systemd-logind[1579]: New session 3 of user core.
Nov 4 04:55:51.163928 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 04:55:51.306897 sshd[1788]: Connection closed by 139.178.89.65 port 35584
Nov 4 04:55:51.307446 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Nov 4 04:55:51.312267 systemd-logind[1579]: Session 3 logged out. Waiting for processes to exit.
Nov 4 04:55:51.312460 systemd[1]: sshd@2-172.232.15.13:22-139.178.89.65:35584.service: Deactivated successfully.
Nov 4 04:55:51.314690 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 04:55:51.316220 systemd-logind[1579]: Removed session 3.
Nov 4 04:55:51.377568 systemd[1]: Started sshd@3-172.232.15.13:22-139.178.89.65:35586.service - OpenSSH per-connection server daemon (139.178.89.65:35586).
Nov 4 04:55:51.697118 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 35586 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:51.699253 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:51.705176 systemd-logind[1579]: New session 4 of user core.
Nov 4 04:55:51.712985 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 04:55:51.867905 sshd[1797]: Connection closed by 139.178.89.65 port 35586
Nov 4 04:55:51.868437 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Nov 4 04:55:51.873359 systemd[1]: sshd@3-172.232.15.13:22-139.178.89.65:35586.service: Deactivated successfully.
Nov 4 04:55:51.876198 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 04:55:51.877173 systemd-logind[1579]: Session 4 logged out. Waiting for processes to exit.
Nov 4 04:55:51.878345 systemd-logind[1579]: Removed session 4.
Nov 4 04:55:51.928188 systemd[1]: Started sshd@4-172.232.15.13:22-139.178.89.65:35600.service - OpenSSH per-connection server daemon (139.178.89.65:35600).
Nov 4 04:55:52.236618 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 35600 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:52.238167 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:52.243672 systemd-logind[1579]: New session 5 of user core.
Nov 4 04:55:52.245909 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 04:55:52.358555 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 04:55:52.358985 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:55:52.373359 sudo[1807]: pam_unix(sudo:session): session closed for user root
Nov 4 04:55:52.426047 sshd[1806]: Connection closed by 139.178.89.65 port 35600
Nov 4 04:55:52.426858 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Nov 4 04:55:52.432024 systemd-logind[1579]: Session 5 logged out. Waiting for processes to exit.
Nov 4 04:55:52.432628 systemd[1]: sshd@4-172.232.15.13:22-139.178.89.65:35600.service: Deactivated successfully.
Nov 4 04:55:52.435179 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 04:55:52.437066 systemd-logind[1579]: Removed session 5.
Nov 4 04:55:52.488937 systemd[1]: Started sshd@5-172.232.15.13:22-139.178.89.65:35608.service - OpenSSH per-connection server daemon (139.178.89.65:35608).
Nov 4 04:55:52.794754 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 35608 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:52.796979 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:52.803549 systemd-logind[1579]: New session 6 of user core.
Nov 4 04:55:52.815208 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 04:55:52.909874 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 04:55:52.910430 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:55:52.918578 sudo[1818]: pam_unix(sudo:session): session closed for user root
Nov 4 04:55:52.926619 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 04:55:52.926995 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:55:52.938585 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 04:55:52.987992 augenrules[1840]: No rules
Nov 4 04:55:52.989579 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 04:55:52.990005 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 04:55:52.991126 sudo[1817]: pam_unix(sudo:session): session closed for user root
Nov 4 04:55:53.043816 sshd[1816]: Connection closed by 139.178.89.65 port 35608
Nov 4 04:55:53.044339 sshd-session[1813]: pam_unix(sshd:session): session closed for user core
Nov 4 04:55:53.049153 systemd-logind[1579]: Session 6 logged out. Waiting for processes to exit.
Nov 4 04:55:53.049467 systemd[1]: sshd@5-172.232.15.13:22-139.178.89.65:35608.service: Deactivated successfully.
Nov 4 04:55:53.051665 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 04:55:53.055458 systemd-logind[1579]: Removed session 6.
Nov 4 04:55:53.111713 systemd[1]: Started sshd@6-172.232.15.13:22-139.178.89.65:35618.service - OpenSSH per-connection server daemon (139.178.89.65:35618).
Nov 4 04:55:53.407496 sshd[1849]: Accepted publickey for core from 139.178.89.65 port 35618 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:55:53.409355 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:55:53.414026 systemd-logind[1579]: New session 7 of user core.
Nov 4 04:55:53.420906 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 04:55:53.517669 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 04:55:53.518022 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 04:55:53.874767 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 04:55:53.885095 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 04:55:54.138185 dockerd[1871]: time="2025-11-04T04:55:54.137833851Z" level=info msg="Starting up"
Nov 4 04:55:54.138650 dockerd[1871]: time="2025-11-04T04:55:54.138576849Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 04:55:54.150860 dockerd[1871]: time="2025-11-04T04:55:54.150753935Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 04:55:54.176525 systemd[1]: var-lib-docker-metacopy\x2dcheck2889118590-merged.mount: Deactivated successfully.
Nov 4 04:55:54.204280 dockerd[1871]: time="2025-11-04T04:55:54.204217780Z" level=info msg="Loading containers: start."
Nov 4 04:55:54.215819 kernel: Initializing XFRM netlink socket
Nov 4 04:55:54.493370 systemd-networkd[1499]: docker0: Link UP
Nov 4 04:55:54.497401 dockerd[1871]: time="2025-11-04T04:55:54.497362782Z" level=info msg="Loading containers: done."
Nov 4 04:55:54.511485 dockerd[1871]: time="2025-11-04T04:55:54.511371965Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 04:55:54.511485 dockerd[1871]: time="2025-11-04T04:55:54.511452043Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 04:55:54.511832 dockerd[1871]: time="2025-11-04T04:55:54.511750549Z" level=info msg="Initializing buildkit"
Nov 4 04:55:54.534551 dockerd[1871]: time="2025-11-04T04:55:54.534311659Z" level=info msg="Completed buildkit initialization"
Nov 4 04:55:54.541705 dockerd[1871]: time="2025-11-04T04:55:54.541649933Z" level=info msg="Daemon has completed initialization"
Nov 4 04:55:54.542427 dockerd[1871]: time="2025-11-04T04:55:54.541926744Z" level=info msg="API listen on /run/docker.sock"
Nov 4 04:55:54.542206 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 04:55:55.375174 containerd[1610]: time="2025-11-04T04:55:55.375114030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 4 04:55:56.206355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218993216.mount: Deactivated successfully.
Nov 4 04:55:57.152699 containerd[1610]: time="2025-11-04T04:55:57.151804710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:57.152699 containerd[1610]: time="2025-11-04T04:55:57.152614194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726"
Nov 4 04:55:57.153404 containerd[1610]: time="2025-11-04T04:55:57.153370394Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:57.155233 containerd[1610]: time="2025-11-04T04:55:57.155210787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:57.156232 containerd[1610]: time="2025-11-04T04:55:57.156198385Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.7810448s"
Nov 4 04:55:57.156269 containerd[1610]: time="2025-11-04T04:55:57.156234058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 4 04:55:57.156725 containerd[1610]: time="2025-11-04T04:55:57.156695051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 4 04:55:58.599004 containerd[1610]: time="2025-11-04T04:55:58.598937934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:58.600547 containerd[1610]: time="2025-11-04T04:55:58.600506689Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689"
Nov 4 04:55:58.601101 containerd[1610]: time="2025-11-04T04:55:58.601061560Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:58.604521 containerd[1610]: time="2025-11-04T04:55:58.603533726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:58.604521 containerd[1610]: time="2025-11-04T04:55:58.604317031Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.44759093s"
Nov 4 04:55:58.604521 containerd[1610]: time="2025-11-04T04:55:58.604339327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 4 04:55:58.605282 containerd[1610]: time="2025-11-04T04:55:58.605264912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 4 04:55:58.870528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 04:55:58.872722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:55:59.090350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:55:59.099295 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 04:55:59.146023 kubelet[2151]: E1104 04:55:59.145818 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 04:55:59.151871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 04:55:59.152150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 04:55:59.152623 systemd[1]: kubelet.service: Consumed 205ms CPU time, 108.8M memory peak.
Nov 4 04:55:59.967934 containerd[1610]: time="2025-11-04T04:55:59.967811152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:59.969194 containerd[1610]: time="2025-11-04T04:55:59.969020889Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431"
Nov 4 04:55:59.969809 containerd[1610]: time="2025-11-04T04:55:59.969751231Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:59.972819 containerd[1610]: time="2025-11-04T04:55:59.972767318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:55:59.973748 containerd[1610]: time="2025-11-04T04:55:59.973719055Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.368377058s"
Nov 4 04:55:59.973853 containerd[1610]: time="2025-11-04T04:55:59.973835132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 4 04:55:59.974396 containerd[1610]: time="2025-11-04T04:55:59.974344771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 4 04:56:01.204076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336638505.mount: Deactivated successfully.
Nov 4 04:56:01.589511 containerd[1610]: time="2025-11-04T04:56:01.589387486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:01.590440 containerd[1610]: time="2025-11-04T04:56:01.590325687Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=20341046"
Nov 4 04:56:01.590957 containerd[1610]: time="2025-11-04T04:56:01.590925527Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:01.592421 containerd[1610]: time="2025-11-04T04:56:01.592387813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:01.593109 containerd[1610]: time="2025-11-04T04:56:01.593086718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.618707228s"
Nov 4 04:56:01.593182 containerd[1610]: time="2025-11-04T04:56:01.593168029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 4 04:56:01.593847 containerd[1610]: time="2025-11-04T04:56:01.593823404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 4 04:56:02.235702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119882097.mount: Deactivated successfully.
Nov 4 04:56:03.036169 containerd[1610]: time="2025-11-04T04:56:03.035247992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:03.036169 containerd[1610]: time="2025-11-04T04:56:03.036139044Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467"
Nov 4 04:56:03.036733 containerd[1610]: time="2025-11-04T04:56:03.036710121Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:03.039172 containerd[1610]: time="2025-11-04T04:56:03.039151074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:03.040253 containerd[1610]: time="2025-11-04T04:56:03.040231564Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.446379203s"
Nov 4 04:56:03.040327 containerd[1610]: time="2025-11-04T04:56:03.040314181Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 4 04:56:03.041502 containerd[1610]: time="2025-11-04T04:56:03.041458157Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 4 04:56:03.665382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292003640.mount: Deactivated successfully.
Nov 4 04:56:03.672373 containerd[1610]: time="2025-11-04T04:56:03.672332338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:03.673093 containerd[1610]: time="2025-11-04T04:56:03.673057426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 4 04:56:03.673844 containerd[1610]: time="2025-11-04T04:56:03.673809422Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:03.675714 containerd[1610]: time="2025-11-04T04:56:03.675681210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 04:56:03.676347 containerd[1610]: time="2025-11-04T04:56:03.676324082Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 634.819317ms" Nov 4 04:56:03.676426 containerd[1610]: time="2025-11-04T04:56:03.676411444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 4 04:56:03.677197 containerd[1610]: time="2025-11-04T04:56:03.677165973Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 4 04:56:04.566414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156213801.mount: Deactivated successfully. Nov 4 04:56:06.225141 containerd[1610]: time="2025-11-04T04:56:06.225011880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:06.226717 containerd[1610]: time="2025-11-04T04:56:06.226461246Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Nov 4 04:56:06.227373 containerd[1610]: time="2025-11-04T04:56:06.227337480Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:06.229960 containerd[1610]: time="2025-11-04T04:56:06.229921631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:06.231325 containerd[1610]: time="2025-11-04T04:56:06.231101948Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"58938593\" in 2.553911373s" Nov 4 04:56:06.231404 containerd[1610]: time="2025-11-04T04:56:06.231388519Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 4 04:56:08.466555 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:56:08.466723 systemd[1]: kubelet.service: Consumed 205ms CPU time, 108.8M memory peak. Nov 4 04:56:08.469358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:56:08.496925 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-7.scope)... Nov 4 04:56:08.496948 systemd[1]: Reloading... Nov 4 04:56:08.646811 zram_generator::config[2351]: No configuration found. Nov 4 04:56:08.879560 systemd[1]: Reloading finished in 382 ms. Nov 4 04:56:08.951372 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 04:56:08.951613 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 04:56:08.952055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:56:08.952170 systemd[1]: kubelet.service: Consumed 139ms CPU time, 98.3M memory peak. Nov 4 04:56:08.953867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 04:56:09.130039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 04:56:09.139106 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 04:56:09.190827 kubelet[2409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:56:09.190827 kubelet[2409]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 4 04:56:09.190827 kubelet[2409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 04:56:09.190827 kubelet[2409]: I1104 04:56:09.190614 2409 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 04:56:09.592571 kubelet[2409]: I1104 04:56:09.592216 2409 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 04:56:09.592737 kubelet[2409]: I1104 04:56:09.592723 2409 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 04:56:09.593808 kubelet[2409]: I1104 04:56:09.593475 2409 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 04:56:09.629340 kubelet[2409]: I1104 04:56:09.629321 2409 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 04:56:09.629592 kubelet[2409]: E1104 04:56:09.629543 2409 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.15.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.15.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 04:56:09.638070 kubelet[2409]: I1104 04:56:09.638033 2409 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 04:56:09.643671 kubelet[2409]: I1104 04:56:09.643653 2409 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 04:56:09.644038 kubelet[2409]: I1104 04:56:09.644018 2409 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 04:56:09.644233 kubelet[2409]: I1104 04:56:09.644095 2409 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-15-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 04:56:09.644387 kubelet[2409]: I1104 04:56:09.644373 2409 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 04:56:09.644442 
kubelet[2409]: I1104 04:56:09.644434 2409 container_manager_linux.go:303] "Creating device plugin manager" Nov 4 04:56:09.644579 kubelet[2409]: I1104 04:56:09.644568 2409 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:56:09.646883 kubelet[2409]: I1104 04:56:09.646868 2409 kubelet.go:480] "Attempting to sync node with API server" Nov 4 04:56:09.646995 kubelet[2409]: I1104 04:56:09.646979 2409 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 04:56:09.647068 kubelet[2409]: I1104 04:56:09.647059 2409 kubelet.go:386] "Adding apiserver pod source" Nov 4 04:56:09.647122 kubelet[2409]: I1104 04:56:09.647114 2409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 04:56:09.652225 kubelet[2409]: E1104 04:56:09.652188 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.15.13:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-15-13&limit=500&resourceVersion=0\": dial tcp 172.232.15.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 04:56:09.652332 kubelet[2409]: I1104 04:56:09.652297 2409 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 4 04:56:09.652913 kubelet[2409]: I1104 04:56:09.652854 2409 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 04:56:09.653841 kubelet[2409]: W1104 04:56:09.653761 2409 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 4 04:56:09.658032 kubelet[2409]: I1104 04:56:09.658015 2409 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 4 04:56:09.658385 kubelet[2409]: I1104 04:56:09.658361 2409 server.go:1289] "Started kubelet" Nov 4 04:56:09.665464 kubelet[2409]: I1104 04:56:09.662858 2409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 04:56:09.666305 kubelet[2409]: I1104 04:56:09.666272 2409 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 04:56:09.670807 kubelet[2409]: I1104 04:56:09.670077 2409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 04:56:09.671454 kubelet[2409]: I1104 04:56:09.671432 2409 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 04:56:09.675698 kubelet[2409]: I1104 04:56:09.675663 2409 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 04:56:09.675922 kubelet[2409]: E1104 04:56:09.675894 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.15.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.15.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 04:56:09.677616 kubelet[2409]: I1104 04:56:09.677310 2409 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 4 04:56:09.677616 kubelet[2409]: E1104 04:56:09.677515 2409 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-15-13\" not found" Nov 4 04:56:09.679027 kubelet[2409]: I1104 04:56:09.679009 2409 factory.go:223] Registration of the systemd container factory successfully Nov 4 04:56:09.679178 kubelet[2409]: I1104 04:56:09.679162 2409 factory.go:221] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 04:56:09.680600 kubelet[2409]: E1104 04:56:09.679551 2409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.15.13:6443/api/v1/namespaces/default/events\": dial tcp 172.232.15.13:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-15-13.1874b4d2543f811c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-15-13,UID:172-232-15-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-15-13,},FirstTimestamp:2025-11-04 04:56:09.658286364 +0000 UTC m=+0.512310234,LastTimestamp:2025-11-04 04:56:09.658286364 +0000 UTC m=+0.512310234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-15-13,}" Nov 4 04:56:09.680799 kubelet[2409]: E1104 04:56:09.680755 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-13?timeout=10s\": dial tcp 172.232.15.13:6443: connect: connection refused" interval="200ms" Nov 4 04:56:09.682056 kubelet[2409]: I1104 04:56:09.681427 2409 server.go:317] "Adding debug handlers to kubelet server" Nov 4 04:56:09.683738 kubelet[2409]: E1104 04:56:09.683719 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.15.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.15.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 04:56:09.683985 kubelet[2409]: I1104 04:56:09.683959 2409 desired_state_of_world_populator.go:150] "Desired state 
populator starts to run" Nov 4 04:56:09.684030 kubelet[2409]: I1104 04:56:09.684022 2409 reconciler.go:26] "Reconciler: start to sync state" Nov 4 04:56:09.684854 kubelet[2409]: I1104 04:56:09.684838 2409 factory.go:223] Registration of the containerd container factory successfully Nov 4 04:56:09.695052 kubelet[2409]: I1104 04:56:09.695010 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 4 04:56:09.696471 kubelet[2409]: I1104 04:56:09.696441 2409 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 4 04:56:09.696471 kubelet[2409]: I1104 04:56:09.696473 2409 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 4 04:56:09.696547 kubelet[2409]: I1104 04:56:09.696492 2409 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 04:56:09.696547 kubelet[2409]: I1104 04:56:09.696500 2409 kubelet.go:2436] "Starting kubelet main sync loop" Nov 4 04:56:09.696547 kubelet[2409]: E1104 04:56:09.696542 2409 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 04:56:09.709360 kubelet[2409]: E1104 04:56:09.709322 2409 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.15.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.15.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 04:56:09.709678 kubelet[2409]: E1104 04:56:09.709644 2409 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 04:56:09.715015 kubelet[2409]: I1104 04:56:09.714986 2409 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 04:56:09.715015 kubelet[2409]: I1104 04:56:09.715009 2409 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 04:56:09.715090 kubelet[2409]: I1104 04:56:09.715027 2409 state_mem.go:36] "Initialized new in-memory state store" Nov 4 04:56:09.717082 kubelet[2409]: I1104 04:56:09.717052 2409 policy_none.go:49] "None policy: Start" Nov 4 04:56:09.717082 kubelet[2409]: I1104 04:56:09.717075 2409 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 4 04:56:09.717151 kubelet[2409]: I1104 04:56:09.717090 2409 state_mem.go:35] "Initializing new in-memory state store" Nov 4 04:56:09.724324 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 04:56:09.737511 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 04:56:09.741307 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 4 04:56:09.752072 kubelet[2409]: E1104 04:56:09.751868 2409 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 04:56:09.752985 kubelet[2409]: I1104 04:56:09.752714 2409 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 04:56:09.752985 kubelet[2409]: I1104 04:56:09.752733 2409 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 04:56:09.752985 kubelet[2409]: I1104 04:56:09.752922 2409 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 04:56:09.754242 kubelet[2409]: E1104 04:56:09.754227 2409 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 04:56:09.754345 kubelet[2409]: E1104 04:56:09.754331 2409 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-15-13\" not found" Nov 4 04:56:09.809942 systemd[1]: Created slice kubepods-burstable-podb178c3960a553838d104360f2473b7fb.slice - libcontainer container kubepods-burstable-podb178c3960a553838d104360f2473b7fb.slice. Nov 4 04:56:09.835526 kubelet[2409]: E1104 04:56:09.835496 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13" Nov 4 04:56:09.839572 systemd[1]: Created slice kubepods-burstable-pod5c245969489a944ea2b5a55899ab0fa6.slice - libcontainer container kubepods-burstable-pod5c245969489a944ea2b5a55899ab0fa6.slice. Nov 4 04:56:09.851807 kubelet[2409]: E1104 04:56:09.851117 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13" Nov 4 04:56:09.854488 kubelet[2409]: I1104 04:56:09.854422 2409 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-13" Nov 4 04:56:09.855110 kubelet[2409]: E1104 04:56:09.855075 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.13:6443/api/v1/nodes\": dial tcp 172.232.15.13:6443: connect: connection refused" node="172-232-15-13" Nov 4 04:56:09.855254 systemd[1]: Created slice kubepods-burstable-pod22da257c88279de139608874613f904a.slice - libcontainer container kubepods-burstable-pod22da257c88279de139608874613f904a.slice. 
Nov 4 04:56:09.857294 kubelet[2409]: E1104 04:56:09.857141 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13" Nov 4 04:56:09.881826 kubelet[2409]: E1104 04:56:09.881802 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-13?timeout=10s\": dial tcp 172.232.15.13:6443: connect: connection refused" interval="400ms" Nov 4 04:56:09.985709 kubelet[2409]: I1104 04:56:09.985652 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-ca-certs\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13" Nov 4 04:56:09.985709 kubelet[2409]: I1104 04:56:09.985694 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-flexvolume-dir\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13" Nov 4 04:56:09.985865 kubelet[2409]: I1104 04:56:09.985718 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13" Nov 4 04:56:09.985865 kubelet[2409]: I1104 04:56:09.985735 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/22da257c88279de139608874613f904a-kubeconfig\") pod \"kube-scheduler-172-232-15-13\" (UID: \"22da257c88279de139608874613f904a\") " pod="kube-system/kube-scheduler-172-232-15-13" Nov 4 04:56:09.985865 kubelet[2409]: I1104 04:56:09.985751 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-k8s-certs\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13" Nov 4 04:56:09.985865 kubelet[2409]: I1104 04:56:09.985764 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-k8s-certs\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13" Nov 4 04:56:09.986140 kubelet[2409]: I1104 04:56:09.986106 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-kubeconfig\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13" Nov 4 04:56:09.986206 kubelet[2409]: I1104 04:56:09.986151 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-ca-certs\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13" Nov 4 04:56:09.986206 kubelet[2409]: I1104 04:56:09.986170 2409 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13" Nov 4 04:56:10.059875 kubelet[2409]: I1104 04:56:10.059844 2409 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-13" Nov 4 04:56:10.060178 kubelet[2409]: E1104 04:56:10.060122 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.13:6443/api/v1/nodes\": dial tcp 172.232.15.13:6443: connect: connection refused" node="172-232-15-13" Nov 4 04:56:10.136803 kubelet[2409]: E1104 04:56:10.136755 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:10.137382 containerd[1610]: time="2025-11-04T04:56:10.137353366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-15-13,Uid:b178c3960a553838d104360f2473b7fb,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:10.151718 kubelet[2409]: E1104 04:56:10.151611 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:10.154661 containerd[1610]: time="2025-11-04T04:56:10.152293336Z" level=info msg="connecting to shim fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c" address="unix:///run/containerd/s/7493c7b9bd7e449492a6aa6689536e59b80b18ed61afb9cba0a0402b78319c85" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:10.155270 containerd[1610]: time="2025-11-04T04:56:10.155249581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-15-13,Uid:5c245969489a944ea2b5a55899ab0fa6,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:10.158130 kubelet[2409]: E1104 04:56:10.158113 2409 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:10.158745 containerd[1610]: time="2025-11-04T04:56:10.158725080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-15-13,Uid:22da257c88279de139608874613f904a,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:10.191819 containerd[1610]: time="2025-11-04T04:56:10.191731994Z" level=info msg="connecting to shim c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1" address="unix:///run/containerd/s/a045c2131c530a353965e47ae642bdaca3910d1d5ac9c8def6979d60ee21bc7c" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:10.196990 containerd[1610]: time="2025-11-04T04:56:10.196935963Z" level=info msg="connecting to shim f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178" address="unix:///run/containerd/s/470ae912e5750fb816985c060d35fe5a7762fe956a8a9cb0d45eecbd601d0c48" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:10.209916 systemd[1]: Started cri-containerd-fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c.scope - libcontainer container fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c. Nov 4 04:56:10.250915 systemd[1]: Started cri-containerd-c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1.scope - libcontainer container c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1. Nov 4 04:56:10.255323 systemd[1]: Started cri-containerd-f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178.scope - libcontainer container f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178. 
Nov 4 04:56:10.282800 kubelet[2409]: E1104 04:56:10.282443 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.15.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-15-13?timeout=10s\": dial tcp 172.232.15.13:6443: connect: connection refused" interval="800ms"
Nov 4 04:56:10.306020 containerd[1610]: time="2025-11-04T04:56:10.305706814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-15-13,Uid:b178c3960a553838d104360f2473b7fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c\""
Nov 4 04:56:10.308210 kubelet[2409]: E1104 04:56:10.308191 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:10.314811 containerd[1610]: time="2025-11-04T04:56:10.314740326Z" level=info msg="CreateContainer within sandbox \"fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 4 04:56:10.323705 containerd[1610]: time="2025-11-04T04:56:10.323676038Z" level=info msg="Container 3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:10.330540 containerd[1610]: time="2025-11-04T04:56:10.330514469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-15-13,Uid:5c245969489a944ea2b5a55899ab0fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1\""
Nov 4 04:56:10.331211 containerd[1610]: time="2025-11-04T04:56:10.331116436Z" level=info msg="CreateContainer within sandbox \"fa10d576cb713b8e922e1e980a29dd019be47a15ae8603b07753e7f0ff24399c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84\""
Nov 4 04:56:10.332003 kubelet[2409]: E1104 04:56:10.331984 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:10.332471 containerd[1610]: time="2025-11-04T04:56:10.332451415Z" level=info msg="StartContainer for \"3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84\""
Nov 4 04:56:10.335515 containerd[1610]: time="2025-11-04T04:56:10.335494255Z" level=info msg="CreateContainer within sandbox \"c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 4 04:56:10.336019 containerd[1610]: time="2025-11-04T04:56:10.335998572Z" level=info msg="connecting to shim 3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84" address="unix:///run/containerd/s/7493c7b9bd7e449492a6aa6689536e59b80b18ed61afb9cba0a0402b78319c85" protocol=ttrpc version=3
Nov 4 04:56:10.352016 containerd[1610]: time="2025-11-04T04:56:10.351932181Z" level=info msg="Container 09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:10.364989 systemd[1]: Started cri-containerd-3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84.scope - libcontainer container 3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84.
Nov 4 04:56:10.368582 containerd[1610]: time="2025-11-04T04:56:10.368527491Z" level=info msg="CreateContainer within sandbox \"c63e8df563a244bb1fd825e2e5ed9416aa18523cf5636a5f2c12cf280a7563f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452\""
Nov 4 04:56:10.369211 containerd[1610]: time="2025-11-04T04:56:10.369192474Z" level=info msg="StartContainer for \"09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452\""
Nov 4 04:56:10.370071 containerd[1610]: time="2025-11-04T04:56:10.370050336Z" level=info msg="connecting to shim 09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452" address="unix:///run/containerd/s/a045c2131c530a353965e47ae642bdaca3910d1d5ac9c8def6979d60ee21bc7c" protocol=ttrpc version=3
Nov 4 04:56:10.396096 containerd[1610]: time="2025-11-04T04:56:10.395957443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-15-13,Uid:22da257c88279de139608874613f904a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178\""
Nov 4 04:56:10.399309 kubelet[2409]: E1104 04:56:10.399291 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:10.401178 systemd[1]: Started cri-containerd-09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452.scope - libcontainer container 09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452.
Nov 4 04:56:10.408054 containerd[1610]: time="2025-11-04T04:56:10.408020231Z" level=info msg="CreateContainer within sandbox \"f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 4 04:56:10.420893 containerd[1610]: time="2025-11-04T04:56:10.420411933Z" level=info msg="Container 5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:10.426124 containerd[1610]: time="2025-11-04T04:56:10.425930651Z" level=info msg="CreateContainer within sandbox \"f12c6dc3518f6021fdee480f2bc8ec11872c916bbca84f1b56d4e182974e4178\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0\""
Nov 4 04:56:10.426324 containerd[1610]: time="2025-11-04T04:56:10.426286217Z" level=info msg="StartContainer for \"5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0\""
Nov 4 04:56:10.428564 containerd[1610]: time="2025-11-04T04:56:10.428532290Z" level=info msg="connecting to shim 5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0" address="unix:///run/containerd/s/470ae912e5750fb816985c060d35fe5a7762fe956a8a9cb0d45eecbd601d0c48" protocol=ttrpc version=3
Nov 4 04:56:10.464184 kubelet[2409]: I1104 04:56:10.464109 2409 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-13"
Nov 4 04:56:10.464404 kubelet[2409]: E1104 04:56:10.464373 2409 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.15.13:6443/api/v1/nodes\": dial tcp 172.232.15.13:6443: connect: connection refused" node="172-232-15-13"
Nov 4 04:56:10.473908 systemd[1]: Started cri-containerd-5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0.scope - libcontainer container 5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0.
Nov 4 04:56:10.488558 containerd[1610]: time="2025-11-04T04:56:10.488533028Z" level=info msg="StartContainer for \"09288b3989d847a23623d5e59501c865da742f7e6348524a8246a665851d8452\" returns successfully"
Nov 4 04:56:10.489844 containerd[1610]: time="2025-11-04T04:56:10.489756031Z" level=info msg="StartContainer for \"3176362f5ee54f3b6fe9e0a57ca94fbf14bd30b39bd473353e3462bd592c3e84\" returns successfully"
Nov 4 04:56:10.697311 containerd[1610]: time="2025-11-04T04:56:10.696927661Z" level=info msg="StartContainer for \"5596a2b0f5662783ea37635e32948694e812cb853238784ee0e71bb7463734e0\" returns successfully"
Nov 4 04:56:10.718097 kubelet[2409]: E1104 04:56:10.718067 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:10.718449 kubelet[2409]: E1104 04:56:10.718205 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:10.722503 kubelet[2409]: E1104 04:56:10.722476 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:10.722604 kubelet[2409]: E1104 04:56:10.722578 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:10.723282 kubelet[2409]: E1104 04:56:10.723257 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:10.723399 kubelet[2409]: E1104 04:56:10.723371 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:11.268500 kubelet[2409]: I1104 04:56:11.268467 2409 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-13"
Nov 4 04:56:11.727177 kubelet[2409]: E1104 04:56:11.727107 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:11.729088 kubelet[2409]: E1104 04:56:11.727367 2409 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:11.729088 kubelet[2409]: E1104 04:56:11.727482 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:11.729088 kubelet[2409]: E1104 04:56:11.727624 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:12.243333 kubelet[2409]: E1104 04:56:12.243270 2409 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-15-13\" not found" node="172-232-15-13"
Nov 4 04:56:12.376652 kubelet[2409]: I1104 04:56:12.376583 2409 kubelet_node_status.go:78] "Successfully registered node" node="172-232-15-13"
Nov 4 04:56:12.378116 kubelet[2409]: I1104 04:56:12.378084 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:12.387429 kubelet[2409]: E1104 04:56:12.387328 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-15-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:12.387429 kubelet[2409]: I1104 04:56:12.387347 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:12.389513 kubelet[2409]: E1104 04:56:12.389413 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-15-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:12.389513 kubelet[2409]: I1104 04:56:12.389480 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:12.391860 kubelet[2409]: E1104 04:56:12.391829 2409 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-15-13\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:12.669848 kubelet[2409]: I1104 04:56:12.669820 2409 apiserver.go:52] "Watching apiserver"
Nov 4 04:56:12.684678 kubelet[2409]: I1104 04:56:12.684624 2409 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 04:56:13.371529 kubelet[2409]: I1104 04:56:13.371462 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:13.378211 kubelet[2409]: E1104 04:56:13.378152 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:13.621337 kubelet[2409]: I1104 04:56:13.621278 2409 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:13.625638 kubelet[2409]: E1104 04:56:13.625552 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:13.728231 kubelet[2409]: E1104 04:56:13.727662 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:13.729229 kubelet[2409]: E1104 04:56:13.729194 2409 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:14.098385 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-7.scope)...
Nov 4 04:56:14.098409 systemd[1]: Reloading...
Nov 4 04:56:14.221836 zram_generator::config[2729]: No configuration found.
Nov 4 04:56:14.456211 systemd[1]: Reloading finished in 357 ms.
Nov 4 04:56:14.491986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:14.505839 systemd[1]: kubelet.service: Deactivated successfully.
Nov 4 04:56:14.506185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:14.506247 systemd[1]: kubelet.service: Consumed 934ms CPU time, 130.9M memory peak.
Nov 4 04:56:14.508497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 04:56:14.696253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 04:56:14.708417 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 04:56:14.765347 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 04:56:14.765347 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 04:56:14.765347 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 04:56:14.766281 kubelet[2780]: I1104 04:56:14.765435 2780 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 04:56:14.778610 kubelet[2780]: I1104 04:56:14.778573 2780 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 4 04:56:14.778610 kubelet[2780]: I1104 04:56:14.778599 2780 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 04:56:14.779205 kubelet[2780]: I1104 04:56:14.778871 2780 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 04:56:14.780316 kubelet[2780]: I1104 04:56:14.780290 2780 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 4 04:56:14.782817 kubelet[2780]: I1104 04:56:14.782757 2780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 04:56:14.788581 kubelet[2780]: I1104 04:56:14.788555 2780 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 04:56:14.796430 kubelet[2780]: I1104 04:56:14.796399 2780 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 04:56:14.796697 kubelet[2780]: I1104 04:56:14.796663 2780 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 04:56:14.798653 kubelet[2780]: I1104 04:56:14.796691 2780 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-15-13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 04:56:14.798653 kubelet[2780]: I1104 04:56:14.796999 2780 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 04:56:14.798653 kubelet[2780]: I1104 04:56:14.797009 2780 container_manager_linux.go:303] "Creating device plugin manager"
Nov 4 04:56:14.798653 kubelet[2780]: I1104 04:56:14.797071 2780 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 04:56:14.798653 kubelet[2780]: I1104 04:56:14.797282 2780 kubelet.go:480] "Attempting to sync node with API server"
Nov 4 04:56:14.799013 kubelet[2780]: I1104 04:56:14.797296 2780 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 04:56:14.799013 kubelet[2780]: I1104 04:56:14.797419 2780 kubelet.go:386] "Adding apiserver pod source"
Nov 4 04:56:14.799013 kubelet[2780]: I1104 04:56:14.797438 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 04:56:14.803086 kubelet[2780]: I1104 04:56:14.803066 2780 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 4 04:56:14.803698 kubelet[2780]: I1104 04:56:14.803681 2780 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 04:56:14.808106 kubelet[2780]: I1104 04:56:14.807922 2780 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 04:56:14.808197 kubelet[2780]: I1104 04:56:14.808187 2780 server.go:1289] "Started kubelet"
Nov 4 04:56:14.809945 kubelet[2780]: I1104 04:56:14.809927 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 04:56:14.821162 kubelet[2780]: I1104 04:56:14.821130 2780 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 04:56:14.825226 kubelet[2780]: I1104 04:56:14.825186 2780 server.go:317] "Adding debug handlers to kubelet server"
Nov 4 04:56:14.831821 kubelet[2780]: I1104 04:56:14.831717 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 04:56:14.832462 kubelet[2780]: I1104 04:56:14.832438 2780 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 04:56:14.833128 kubelet[2780]: I1104 04:56:14.833110 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 04:56:14.835670 kubelet[2780]: I1104 04:56:14.835646 2780 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 04:56:14.838605 kubelet[2780]: I1104 04:56:14.837445 2780 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 04:56:14.838605 kubelet[2780]: I1104 04:56:14.837554 2780 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 04:56:14.838673 kubelet[2780]: I1104 04:56:14.838629 2780 factory.go:223] Registration of the systemd container factory successfully
Nov 4 04:56:14.838795 kubelet[2780]: I1104 04:56:14.838701 2780 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 04:56:14.847154 kubelet[2780]: E1104 04:56:14.847136 2780 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 04:56:14.847382 kubelet[2780]: I1104 04:56:14.847367 2780 factory.go:223] Registration of the containerd container factory successfully
Nov 4 04:56:14.851182 kubelet[2780]: I1104 04:56:14.851151 2780 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 4 04:56:14.853013 kubelet[2780]: I1104 04:56:14.852975 2780 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 4 04:56:14.853013 kubelet[2780]: I1104 04:56:14.852998 2780 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 4 04:56:14.853013 kubelet[2780]: I1104 04:56:14.853014 2780 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 04:56:14.853113 kubelet[2780]: I1104 04:56:14.853021 2780 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 4 04:56:14.853113 kubelet[2780]: E1104 04:56:14.853061 2780 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 04:56:14.932159 kubelet[2780]: I1104 04:56:14.932096 2780 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 04:56:14.932159 kubelet[2780]: I1104 04:56:14.932114 2780 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 04:56:14.932159 kubelet[2780]: I1104 04:56:14.932132 2780 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 04:56:14.932323 kubelet[2780]: I1104 04:56:14.932251 2780 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 04:56:14.932323 kubelet[2780]: I1104 04:56:14.932260 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 04:56:14.932323 kubelet[2780]: I1104 04:56:14.932276 2780 policy_none.go:49] "None policy: Start"
Nov 4 04:56:14.932323 kubelet[2780]: I1104 04:56:14.932285 2780 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 04:56:14.932323 kubelet[2780]: I1104 04:56:14.932295 2780 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 04:56:14.932428 kubelet[2780]: I1104 04:56:14.932373 2780 state_mem.go:75] "Updated machine memory state"
Nov 4 04:56:14.938152 kubelet[2780]: E1104 04:56:14.937851 2780 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 04:56:14.938152 kubelet[2780]: I1104 04:56:14.938004 2780 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 04:56:14.938152 kubelet[2780]: I1104 04:56:14.938015 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 04:56:14.938508 kubelet[2780]: I1104 04:56:14.938325 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 04:56:14.942291 kubelet[2780]: E1104 04:56:14.941503 2780 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 04:56:14.954104 kubelet[2780]: I1104 04:56:14.954060 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:14.954818 kubelet[2780]: I1104 04:56:14.954792 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:14.955050 kubelet[2780]: I1104 04:56:14.955025 2780 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:14.963626 kubelet[2780]: E1104 04:56:14.963551 2780 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-15-13\" already exists" pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:14.964982 kubelet[2780]: E1104 04:56:14.964965 2780 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-15-13\" already exists" pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:15.046245 kubelet[2780]: I1104 04:56:15.046214 2780 kubelet_node_status.go:75] "Attempting to register node" node="172-232-15-13"
Nov 4 04:56:15.057860 kubelet[2780]: I1104 04:56:15.057822 2780 kubelet_node_status.go:124] "Node was previously registered" node="172-232-15-13"
Nov 4 04:56:15.058248 kubelet[2780]: I1104 04:56:15.057959 2780 kubelet_node_status.go:78] "Successfully registered node" node="172-232-15-13"
Nov 4 04:56:15.097610 sudo[2816]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 4 04:56:15.098074 sudo[2816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 4 04:56:15.138683 kubelet[2780]: I1104 04:56:15.138327 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-ca-certs\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:15.138683 kubelet[2780]: I1104 04:56:15.138377 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-flexvolume-dir\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:15.138683 kubelet[2780]: I1104 04:56:15.138413 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-kubeconfig\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:15.138683 kubelet[2780]: I1104 04:56:15.138438 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-ca-certs\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:15.138683 kubelet[2780]: I1104 04:56:15.138473 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-k8s-certs\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:15.139307 kubelet[2780]: I1104 04:56:15.138504 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c245969489a944ea2b5a55899ab0fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-15-13\" (UID: \"5c245969489a944ea2b5a55899ab0fa6\") " pod="kube-system/kube-controller-manager-172-232-15-13"
Nov 4 04:56:15.139307 kubelet[2780]: I1104 04:56:15.138530 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22da257c88279de139608874613f904a-kubeconfig\") pod \"kube-scheduler-172-232-15-13\" (UID: \"22da257c88279de139608874613f904a\") " pod="kube-system/kube-scheduler-172-232-15-13"
Nov 4 04:56:15.139307 kubelet[2780]: I1104 04:56:15.138558 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-k8s-certs\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:15.139307 kubelet[2780]: I1104 04:56:15.138589 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b178c3960a553838d104360f2473b7fb-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-15-13\" (UID: \"b178c3960a553838d104360f2473b7fb\") " pod="kube-system/kube-apiserver-172-232-15-13"
Nov 4 04:56:15.265806 kubelet[2780]: E1104 04:56:15.264628 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:15.265806 kubelet[2780]: E1104 04:56:15.265344 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:15.265806 kubelet[2780]: E1104 04:56:15.265434 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:15.454580 sudo[2816]: pam_unix(sudo:session): session closed for user root
Nov 4 04:56:15.801361 kubelet[2780]: I1104 04:56:15.801309 2780 apiserver.go:52] "Watching apiserver"
Nov 4 04:56:15.838196 kubelet[2780]: I1104 04:56:15.837958 2780 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 04:56:15.853522 kubelet[2780]: I1104 04:56:15.853429 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-15-13" podStartSLOduration=1.853416902 podStartE2EDuration="1.853416902s" podCreationTimestamp="2025-11-04 04:56:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:15.852585176 +0000 UTC m=+1.138604763" watchObservedRunningTime="2025-11-04 04:56:15.853416902 +0000 UTC m=+1.139436489"
Nov 4 04:56:15.860480 kubelet[2780]: I1104 04:56:15.860435 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-15-13" podStartSLOduration=2.86042812 podStartE2EDuration="2.86042812s" podCreationTimestamp="2025-11-04 04:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:15.860088398 +0000 UTC m=+1.146107975" watchObservedRunningTime="2025-11-04 04:56:15.86042812 +0000 UTC m=+1.146447687"
Nov 4 04:56:15.867307 kubelet[2780]: I1104 04:56:15.867266 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-15-13" podStartSLOduration=2.86726 podStartE2EDuration="2.86726s" podCreationTimestamp="2025-11-04 04:56:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:15.866811095 +0000 UTC m=+1.152830662" watchObservedRunningTime="2025-11-04 04:56:15.86726 +0000 UTC m=+1.153279567"
Nov 4 04:56:15.892457 kubelet[2780]: E1104 04:56:15.892357 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:15.892728 kubelet[2780]: E1104 04:56:15.892659 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:15.893502 kubelet[2780]: E1104 04:56:15.893486 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:16.668587 sudo[1853]: pam_unix(sudo:session): session closed for user root
Nov 4 04:56:16.721454 sshd[1852]: Connection closed by 139.178.89.65 port 35618
Nov 4 04:56:16.720707 sshd-session[1849]: pam_unix(sshd:session): session closed for user core
Nov 4 04:56:16.726550 systemd[1]: sshd@6-172.232.15.13:22-139.178.89.65:35618.service: Deactivated successfully.
Nov 4 04:56:16.729206 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 04:56:16.729444 systemd[1]: session-7.scope: Consumed 3.849s CPU time, 274M memory peak.
Nov 4 04:56:16.732098 systemd-logind[1579]: Session 7 logged out. Waiting for processes to exit. Nov 4 04:56:16.734665 systemd-logind[1579]: Removed session 7. Nov 4 04:56:16.894940 kubelet[2780]: E1104 04:56:16.894501 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:16.894940 kubelet[2780]: E1104 04:56:16.894904 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:17.085419 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 4 04:56:19.523630 systemd[1]: Created slice kubepods-besteffort-podcfb4f7b8_26f5_4310_9653_981cce330a88.slice - libcontainer container kubepods-besteffort-podcfb4f7b8_26f5_4310_9653_981cce330a88.slice. Nov 4 04:56:19.547858 systemd[1]: Created slice kubepods-burstable-pod04eb90a0_0e18_402d_a5bb_5cde9e44fd0c.slice - libcontainer container kubepods-burstable-pod04eb90a0_0e18_402d_a5bb_5cde9e44fd0c.slice. Nov 4 04:56:19.551862 kubelet[2780]: I1104 04:56:19.550890 2780 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 04:56:19.552178 containerd[1610]: time="2025-11-04T04:56:19.551310061Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 4 04:56:19.552418 kubelet[2780]: I1104 04:56:19.552206 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 04:56:19.569323 kubelet[2780]: I1104 04:56:19.569289 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hubble-tls\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.569393 kubelet[2780]: I1104 04:56:19.569357 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cfb4f7b8-26f5-4310-9653-981cce330a88-lib-modules\") pod \"kube-proxy-fwn7k\" (UID: \"cfb4f7b8-26f5-4310-9653-981cce330a88\") " pod="kube-system/kube-proxy-fwn7k" Nov 4 04:56:19.569393 kubelet[2780]: I1104 04:56:19.569379 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-run\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.569443 kubelet[2780]: I1104 04:56:19.569394 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-bpf-maps\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.569468 kubelet[2780]: I1104 04:56:19.569449 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-clustermesh-secrets\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " 
pod="kube-system/cilium-sbn99" Nov 4 04:56:19.569491 kubelet[2780]: I1104 04:56:19.569465 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz9hf\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569524 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cfb4f7b8-26f5-4310-9653-981cce330a88-kube-proxy\") pod \"kube-proxy-fwn7k\" (UID: \"cfb4f7b8-26f5-4310-9653-981cce330a88\") " pod="kube-system/kube-proxy-fwn7k" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569547 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-etc-cni-netd\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569560 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-lib-modules\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569616 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-xtables-lock\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569866 2780 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cfb4f7b8-26f5-4310-9653-981cce330a88-xtables-lock\") pod \"kube-proxy-fwn7k\" (UID: \"cfb4f7b8-26f5-4310-9653-981cce330a88\") " pod="kube-system/kube-proxy-fwn7k" Nov 4 04:56:19.570049 kubelet[2780]: I1104 04:56:19.569922 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvnr\" (UniqueName: \"kubernetes.io/projected/cfb4f7b8-26f5-4310-9653-981cce330a88-kube-api-access-hcvnr\") pod \"kube-proxy-fwn7k\" (UID: \"cfb4f7b8-26f5-4310-9653-981cce330a88\") " pod="kube-system/kube-proxy-fwn7k" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.569970 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hostproc\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.570045 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-config-path\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.570074 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-net\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.570142 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-cgroup\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.570166 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cni-path\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.570228 kubelet[2780]: I1104 04:56:19.570215 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-kernel\") pod \"cilium-sbn99\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " pod="kube-system/cilium-sbn99" Nov 4 04:56:19.680179 kubelet[2780]: E1104 04:56:19.680132 2780 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 04:56:19.681231 kubelet[2780]: E1104 04:56:19.680307 2780 projected.go:194] Error preparing data for projected volume kube-api-access-hcvnr for pod kube-system/kube-proxy-fwn7k: configmap "kube-root-ca.crt" not found Nov 4 04:56:19.681231 kubelet[2780]: E1104 04:56:19.680622 2780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cfb4f7b8-26f5-4310-9653-981cce330a88-kube-api-access-hcvnr podName:cfb4f7b8-26f5-4310-9653-981cce330a88 nodeName:}" failed. No retries permitted until 2025-11-04 04:56:20.180600105 +0000 UTC m=+5.466619672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hcvnr" (UniqueName: "kubernetes.io/projected/cfb4f7b8-26f5-4310-9653-981cce330a88-kube-api-access-hcvnr") pod "kube-proxy-fwn7k" (UID: "cfb4f7b8-26f5-4310-9653-981cce330a88") : configmap "kube-root-ca.crt" not found Nov 4 04:56:19.687311 kubelet[2780]: E1104 04:56:19.686700 2780 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 04:56:19.687596 kubelet[2780]: E1104 04:56:19.687505 2780 projected.go:194] Error preparing data for projected volume kube-api-access-rz9hf for pod kube-system/cilium-sbn99: configmap "kube-root-ca.crt" not found Nov 4 04:56:19.688894 kubelet[2780]: E1104 04:56:19.688872 2780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf podName:04eb90a0-0e18-402d-a5bb-5cde9e44fd0c nodeName:}" failed. No retries permitted until 2025-11-04 04:56:20.188848232 +0000 UTC m=+5.474867809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rz9hf" (UniqueName: "kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf") pod "cilium-sbn99" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c") : configmap "kube-root-ca.crt" not found Nov 4 04:56:20.414569 kubelet[2780]: E1104 04:56:20.414528 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.438595 kubelet[2780]: E1104 04:56:20.438368 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.439937 containerd[1610]: time="2025-11-04T04:56:20.439903703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fwn7k,Uid:cfb4f7b8-26f5-4310-9653-981cce330a88,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:20.456164 kubelet[2780]: E1104 04:56:20.456116 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.456445 containerd[1610]: time="2025-11-04T04:56:20.456419742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbn99,Uid:04eb90a0-0e18-402d-a5bb-5cde9e44fd0c,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:20.478054 containerd[1610]: time="2025-11-04T04:56:20.477755040Z" level=info msg="connecting to shim dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f" address="unix:///run/containerd/s/6c017398999ea4b575e59b931c6c545524bf511976e4bc32c5cf5229491c82b2" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:20.500045 containerd[1610]: time="2025-11-04T04:56:20.499966116Z" level=info msg="connecting to shim eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056" 
address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:20.536159 systemd[1]: Started cri-containerd-dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f.scope - libcontainer container dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f. Nov 4 04:56:20.545252 systemd[1]: Started cri-containerd-eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056.scope - libcontainer container eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056. Nov 4 04:56:20.561431 kubelet[2780]: E1104 04:56:20.561397 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.613968 containerd[1610]: time="2025-11-04T04:56:20.613687409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sbn99,Uid:04eb90a0-0e18-402d-a5bb-5cde9e44fd0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\"" Nov 4 04:56:20.616705 kubelet[2780]: E1104 04:56:20.616678 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.620099 containerd[1610]: time="2025-11-04T04:56:20.619673770Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 04:56:20.633370 containerd[1610]: time="2025-11-04T04:56:20.633346005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fwn7k,Uid:cfb4f7b8-26f5-4310-9653-981cce330a88,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f\"" Nov 4 04:56:20.634440 kubelet[2780]: E1104 04:56:20.634414 2780 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.640467 containerd[1610]: time="2025-11-04T04:56:20.640434242Z" level=info msg="CreateContainer within sandbox \"dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 04:56:20.649948 containerd[1610]: time="2025-11-04T04:56:20.649681999Z" level=info msg="Container ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:20.655119 containerd[1610]: time="2025-11-04T04:56:20.655085074Z" level=info msg="CreateContainer within sandbox \"dc7ed0bb748f0363b44002871be651c626cdb75497bdf12bbaaf568a6ccfac4f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb\"" Nov 4 04:56:20.658597 containerd[1610]: time="2025-11-04T04:56:20.658481781Z" level=info msg="StartContainer for \"ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb\"" Nov 4 04:56:20.661700 containerd[1610]: time="2025-11-04T04:56:20.661650390Z" level=info msg="connecting to shim ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb" address="unix:///run/containerd/s/6c017398999ea4b575e59b931c6c545524bf511976e4bc32c5cf5229491c82b2" protocol=ttrpc version=3 Nov 4 04:56:20.683926 systemd[1]: Started cri-containerd-ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb.scope - libcontainer container ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb. Nov 4 04:56:20.727452 systemd[1]: Created slice kubepods-besteffort-podbc821d30_98ae_4341_9591_4068a1937a63.slice - libcontainer container kubepods-besteffort-podbc821d30_98ae_4341_9591_4068a1937a63.slice. 
Nov 4 04:56:20.771429 containerd[1610]: time="2025-11-04T04:56:20.771291791Z" level=info msg="StartContainer for \"ffb159a076c7bb1711a0b5159a17421c79c8e4800d5371b188405a74ca203ddb\" returns successfully" Nov 4 04:56:20.779108 kubelet[2780]: I1104 04:56:20.779076 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc821d30-98ae-4341-9591-4068a1937a63-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pgjkb\" (UID: \"bc821d30-98ae-4341-9591-4068a1937a63\") " pod="kube-system/cilium-operator-6c4d7847fc-pgjkb" Nov 4 04:56:20.779262 kubelet[2780]: I1104 04:56:20.779228 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n94wk\" (UniqueName: \"kubernetes.io/projected/bc821d30-98ae-4341-9591-4068a1937a63-kube-api-access-n94wk\") pod \"cilium-operator-6c4d7847fc-pgjkb\" (UID: \"bc821d30-98ae-4341-9591-4068a1937a63\") " pod="kube-system/cilium-operator-6c4d7847fc-pgjkb" Nov 4 04:56:20.914427 kubelet[2780]: E1104 04:56:20.914398 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.916039 kubelet[2780]: E1104 04:56:20.916011 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:20.916303 kubelet[2780]: E1104 04:56:20.916264 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:21.032082 kubelet[2780]: E1104 04:56:21.031472 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:21.033268 containerd[1610]: time="2025-11-04T04:56:21.033146376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pgjkb,Uid:bc821d30-98ae-4341-9591-4068a1937a63,Namespace:kube-system,Attempt:0,}" Nov 4 04:56:21.053287 containerd[1610]: time="2025-11-04T04:56:21.053243429Z" level=info msg="connecting to shim c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541" address="unix:///run/containerd/s/2f09e346d78bc1971dac922a773d23607b874ab7a0696c4ef1cf5b7353fd5587" namespace=k8s.io protocol=ttrpc version=3 Nov 4 04:56:21.087478 systemd[1]: Started cri-containerd-c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541.scope - libcontainer container c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541. Nov 4 04:56:21.152117 containerd[1610]: time="2025-11-04T04:56:21.152064822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pgjkb,Uid:bc821d30-98ae-4341-9591-4068a1937a63,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\"" Nov 4 04:56:21.153001 kubelet[2780]: E1104 04:56:21.152981 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:21.918263 kubelet[2780]: E1104 04:56:21.918190 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:23.107246 kubelet[2780]: E1104 04:56:23.107186 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:23.127736 kubelet[2780]: I1104 04:56:23.127664 2780 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fwn7k" podStartSLOduration=4.127646926 podStartE2EDuration="4.127646926s" podCreationTimestamp="2025-11-04 04:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:20.940359615 +0000 UTC m=+6.226379182" watchObservedRunningTime="2025-11-04 04:56:23.127646926 +0000 UTC m=+8.413666493" Nov 4 04:56:23.924714 kubelet[2780]: E1104 04:56:23.924289 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:24.927902 kubelet[2780]: E1104 04:56:24.927589 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:27.439730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306292562.mount: Deactivated successfully. 
Nov 4 04:56:29.247567 containerd[1610]: time="2025-11-04T04:56:29.247519505Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:29.248501 containerd[1610]: time="2025-11-04T04:56:29.248467431Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=159274972" Nov 4 04:56:29.250792 containerd[1610]: time="2025-11-04T04:56:29.249193916Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 04:56:29.250982 containerd[1610]: time="2025-11-04T04:56:29.250955670Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.631255469s" Nov 4 04:56:29.251050 containerd[1610]: time="2025-11-04T04:56:29.251036404Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 4 04:56:29.252664 containerd[1610]: time="2025-11-04T04:56:29.252642541Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 04:56:29.255950 containerd[1610]: time="2025-11-04T04:56:29.255816184Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 04:56:29.263796 containerd[1610]: time="2025-11-04T04:56:29.262599040Z" level=info msg="Container 90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:29.266649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount116615429.mount: Deactivated successfully. Nov 4 04:56:29.271712 containerd[1610]: time="2025-11-04T04:56:29.271669046Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\"" Nov 4 04:56:29.272338 containerd[1610]: time="2025-11-04T04:56:29.272294736Z" level=info msg="StartContainer for \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\"" Nov 4 04:56:29.275219 containerd[1610]: time="2025-11-04T04:56:29.275150403Z" level=info msg="connecting to shim 90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e" address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" protocol=ttrpc version=3 Nov 4 04:56:29.309956 systemd[1]: Started cri-containerd-90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e.scope - libcontainer container 90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e. Nov 4 04:56:29.348248 containerd[1610]: time="2025-11-04T04:56:29.348156992Z" level=info msg="StartContainer for \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" returns successfully" Nov 4 04:56:29.364213 systemd[1]: cri-containerd-90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e.scope: Deactivated successfully. 
Nov 4 04:56:29.366651 containerd[1610]: time="2025-11-04T04:56:29.366579468Z" level=info msg="received exit event container_id:\"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" id:\"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" pid:3204 exited_at:{seconds:1762232189 nanos:366114146}" Nov 4 04:56:29.393791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e-rootfs.mount: Deactivated successfully. Nov 4 04:56:29.940310 kubelet[2780]: E1104 04:56:29.940256 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:29.950898 containerd[1610]: time="2025-11-04T04:56:29.950861223Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 04:56:29.986203 containerd[1610]: time="2025-11-04T04:56:29.986151279Z" level=info msg="Container 799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:29.990206 containerd[1610]: time="2025-11-04T04:56:29.990178072Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\"" Nov 4 04:56:29.991325 containerd[1610]: time="2025-11-04T04:56:29.990836404Z" level=info msg="StartContainer for \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\"" Nov 4 04:56:29.992911 containerd[1610]: time="2025-11-04T04:56:29.992733665Z" level=info msg="connecting to shim 799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4" 
address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" protocol=ttrpc version=3 Nov 4 04:56:30.014925 systemd[1]: Started cri-containerd-799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4.scope - libcontainer container 799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4. Nov 4 04:56:30.074714 containerd[1610]: time="2025-11-04T04:56:30.074663594Z" level=info msg="StartContainer for \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" returns successfully" Nov 4 04:56:30.095851 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 04:56:30.096718 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:56:30.096847 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:56:30.100713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 04:56:30.107436 systemd[1]: cri-containerd-799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4.scope: Deactivated successfully. Nov 4 04:56:30.108663 containerd[1610]: time="2025-11-04T04:56:30.108488628Z" level=info msg="received exit event container_id:\"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" id:\"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" pid:3251 exited_at:{seconds:1762232190 nanos:108177124}" Nov 4 04:56:30.125972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 04:56:30.334293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3347764173.mount: Deactivated successfully. 
Nov 4 04:56:30.945722 kubelet[2780]: E1104 04:56:30.945664 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Nov 4 04:56:30.953837 containerd[1610]: time="2025-11-04T04:56:30.953285256Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 04:56:30.969393 containerd[1610]: time="2025-11-04T04:56:30.969353759Z" level=info msg="Container d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df: CDI devices from CRI Config.CDIDevices: []" Nov 4 04:56:30.977104 containerd[1610]: time="2025-11-04T04:56:30.977053560Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\"" Nov 4 04:56:30.977793 containerd[1610]: time="2025-11-04T04:56:30.977425647Z" level=info msg="StartContainer for \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\"" Nov 4 04:56:30.979717 containerd[1610]: time="2025-11-04T04:56:30.979692961Z" level=info msg="connecting to shim d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df" address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" protocol=ttrpc version=3 Nov 4 04:56:31.010905 systemd[1]: Started cri-containerd-d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df.scope - libcontainer container d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df. 
Nov 4 04:56:31.066728 containerd[1610]: time="2025-11-04T04:56:31.066662340Z" level=info msg="StartContainer for \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" returns successfully"
Nov 4 04:56:31.067657 systemd[1]: cri-containerd-d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df.scope: Deactivated successfully.
Nov 4 04:56:31.069023 containerd[1610]: time="2025-11-04T04:56:31.068982461Z" level=info msg="received exit event container_id:\"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" id:\"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" pid:3303 exited_at:{seconds:1762232191 nanos:68123904}"
Nov 4 04:56:31.609058 containerd[1610]: time="2025-11-04T04:56:31.609002640Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:31.609789 containerd[1610]: time="2025-11-04T04:56:31.609679219Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17532406"
Nov 4 04:56:31.610291 containerd[1610]: time="2025-11-04T04:56:31.610255794Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 04:56:31.611734 containerd[1610]: time="2025-11-04T04:56:31.611601273Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.358844586s"
Nov 4 04:56:31.611734 containerd[1610]: time="2025-11-04T04:56:31.611632404Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Nov 4 04:56:31.615615 containerd[1610]: time="2025-11-04T04:56:31.615586085Z" level=info msg="CreateContainer within sandbox \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 4 04:56:31.627110 containerd[1610]: time="2025-11-04T04:56:31.625424712Z" level=info msg="Container 3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:31.634178 containerd[1610]: time="2025-11-04T04:56:31.634131980Z" level=info msg="CreateContainer within sandbox \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\""
Nov 4 04:56:31.635320 containerd[1610]: time="2025-11-04T04:56:31.635289680Z" level=info msg="StartContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\""
Nov 4 04:56:31.636839 containerd[1610]: time="2025-11-04T04:56:31.636810366Z" level=info msg="connecting to shim 3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c" address="unix:///run/containerd/s/2f09e346d78bc1971dac922a773d23607b874ab7a0696c4ef1cf5b7353fd5587" protocol=ttrpc version=3
Nov 4 04:56:31.662925 systemd[1]: Started cri-containerd-3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c.scope - libcontainer container 3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c.
Nov 4 04:56:31.675940 update_engine[1582]: I20251104 04:56:31.675843 1582 update_attempter.cc:509] Updating boot flags...
Nov 4 04:56:31.723131 containerd[1610]: time="2025-11-04T04:56:31.722819036Z" level=info msg="StartContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" returns successfully"
Nov 4 04:56:31.968413 kubelet[2780]: E1104 04:56:31.966755 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:31.987701 kubelet[2780]: E1104 04:56:31.985409 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:31.995389 kubelet[2780]: I1104 04:56:31.995314 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pgjkb" podStartSLOduration=1.537252761 podStartE2EDuration="11.995101864s" podCreationTimestamp="2025-11-04 04:56:20 +0000 UTC" firstStartedPulling="2025-11-04 04:56:21.154487792 +0000 UTC m=+6.440507369" lastFinishedPulling="2025-11-04 04:56:31.612336905 +0000 UTC m=+16.898356472" observedRunningTime="2025-11-04 04:56:31.994535859 +0000 UTC m=+17.280555446" watchObservedRunningTime="2025-11-04 04:56:31.995101864 +0000 UTC m=+17.281121451"
Nov 4 04:56:31.997869 containerd[1610]: time="2025-11-04T04:56:31.997822762Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 04:56:32.029883 containerd[1610]: time="2025-11-04T04:56:32.029819707Z" level=info msg="Container 1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:32.037822 containerd[1610]: time="2025-11-04T04:56:32.037107478Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\""
Nov 4 04:56:32.039793 containerd[1610]: time="2025-11-04T04:56:32.039742217Z" level=info msg="StartContainer for \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\""
Nov 4 04:56:32.040867 containerd[1610]: time="2025-11-04T04:56:32.040824211Z" level=info msg="connecting to shim 1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333" address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" protocol=ttrpc version=3
Nov 4 04:56:32.095151 systemd[1]: Started cri-containerd-1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333.scope - libcontainer container 1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333.
Nov 4 04:56:32.202289 systemd[1]: cri-containerd-1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333.scope: Deactivated successfully.
Nov 4 04:56:32.209896 containerd[1610]: time="2025-11-04T04:56:32.209566999Z" level=info msg="received exit event container_id:\"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" id:\"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" pid:3403 exited_at:{seconds:1762232192 nanos:205738321}"
Nov 4 04:56:32.212868 containerd[1610]: time="2025-11-04T04:56:32.212831234Z" level=info msg="StartContainer for \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" returns successfully"
Nov 4 04:56:32.263390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176358004.mount: Deactivated successfully.
Nov 4 04:56:32.994039 kubelet[2780]: E1104 04:56:32.993947 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:32.996485 kubelet[2780]: E1104 04:56:32.995474 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:33.001068 containerd[1610]: time="2025-11-04T04:56:33.001023074Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 04:56:33.019100 containerd[1610]: time="2025-11-04T04:56:33.019056972Z" level=info msg="Container 33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:33.024917 containerd[1610]: time="2025-11-04T04:56:33.024761176Z" level=info msg="CreateContainer within sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\""
Nov 4 04:56:33.026144 containerd[1610]: time="2025-11-04T04:56:33.026086958Z" level=info msg="StartContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\""
Nov 4 04:56:33.028261 containerd[1610]: time="2025-11-04T04:56:33.028213241Z" level=info msg="connecting to shim 33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350" address="unix:///run/containerd/s/fd64790f1aa443aa582868332512141cc0d47cbb58d325bb3535d6447f08b659" protocol=ttrpc version=3
Nov 4 04:56:33.050901 systemd[1]: Started cri-containerd-33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350.scope - libcontainer container 33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350.
Nov 4 04:56:33.108649 containerd[1610]: time="2025-11-04T04:56:33.108581414Z" level=info msg="StartContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" returns successfully"
Nov 4 04:56:33.237646 kubelet[2780]: I1104 04:56:33.237602 2780 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 4 04:56:33.272200 systemd[1]: Created slice kubepods-burstable-pod8edeb4ab_93d3_40aa_bcf7_009d2cf2ad8e.slice - libcontainer container kubepods-burstable-pod8edeb4ab_93d3_40aa_bcf7_009d2cf2ad8e.slice.
Nov 4 04:56:33.283193 systemd[1]: Created slice kubepods-burstable-podffb11247_e671_4fa4_a01f_96ffae47d8f4.slice - libcontainer container kubepods-burstable-podffb11247_e671_4fa4_a01f_96ffae47d8f4.slice.
Nov 4 04:56:33.363297 kubelet[2780]: I1104 04:56:33.363191 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffb11247-e671-4fa4-a01f-96ffae47d8f4-config-volume\") pod \"coredns-674b8bbfcf-f2kkm\" (UID: \"ffb11247-e671-4fa4-a01f-96ffae47d8f4\") " pod="kube-system/coredns-674b8bbfcf-f2kkm"
Nov 4 04:56:33.363575 kubelet[2780]: I1104 04:56:33.363514 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e-config-volume\") pod \"coredns-674b8bbfcf-r9qqm\" (UID: \"8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e\") " pod="kube-system/coredns-674b8bbfcf-r9qqm"
Nov 4 04:56:33.363697 kubelet[2780]: I1104 04:56:33.363680 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9cth\" (UniqueName: \"kubernetes.io/projected/8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e-kube-api-access-t9cth\") pod \"coredns-674b8bbfcf-r9qqm\" (UID: \"8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e\") " pod="kube-system/coredns-674b8bbfcf-r9qqm"
Nov 4 04:56:33.363860 kubelet[2780]: I1104 04:56:33.363844 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6lt4\" (UniqueName: \"kubernetes.io/projected/ffb11247-e671-4fa4-a01f-96ffae47d8f4-kube-api-access-b6lt4\") pod \"coredns-674b8bbfcf-f2kkm\" (UID: \"ffb11247-e671-4fa4-a01f-96ffae47d8f4\") " pod="kube-system/coredns-674b8bbfcf-f2kkm"
Nov 4 04:56:33.581843 kubelet[2780]: E1104 04:56:33.581262 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:33.583489 containerd[1610]: time="2025-11-04T04:56:33.583408505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9qqm,Uid:8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e,Namespace:kube-system,Attempt:0,}"
Nov 4 04:56:33.588170 kubelet[2780]: E1104 04:56:33.588048 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:33.589450 containerd[1610]: time="2025-11-04T04:56:33.589362828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2kkm,Uid:ffb11247-e671-4fa4-a01f-96ffae47d8f4,Namespace:kube-system,Attempt:0,}"
Nov 4 04:56:34.006177 kubelet[2780]: E1104 04:56:34.006118 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:34.022035 kubelet[2780]: I1104 04:56:34.021976 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sbn99" podStartSLOduration=6.38934982 podStartE2EDuration="15.021960772s" podCreationTimestamp="2025-11-04 04:56:19 +0000 UTC" firstStartedPulling="2025-11-04 04:56:20.61942375 +0000 UTC m=+5.905443317" lastFinishedPulling="2025-11-04 04:56:29.252034692 +0000 UTC m=+14.538054269" observedRunningTime="2025-11-04 04:56:34.019663016 +0000 UTC m=+19.305682603" watchObservedRunningTime="2025-11-04 04:56:34.021960772 +0000 UTC m=+19.307980369"
Nov 4 04:56:35.006663 kubelet[2780]: E1104 04:56:35.006624 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:35.496871 systemd-networkd[1499]: cilium_host: Link UP
Nov 4 04:56:35.498667 systemd-networkd[1499]: cilium_net: Link UP
Nov 4 04:56:35.498936 systemd-networkd[1499]: cilium_host: Gained carrier
Nov 4 04:56:35.499147 systemd-networkd[1499]: cilium_net: Gained carrier
Nov 4 04:56:35.618365 systemd-networkd[1499]: cilium_vxlan: Link UP
Nov 4 04:56:35.618375 systemd-networkd[1499]: cilium_vxlan: Gained carrier
Nov 4 04:56:35.838337 kernel: NET: Registered PF_ALG protocol family
Nov 4 04:56:36.011604 kubelet[2780]: E1104 04:56:36.011552 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:36.089929 systemd-networkd[1499]: cilium_host: Gained IPv6LL
Nov 4 04:56:36.154043 systemd-networkd[1499]: cilium_net: Gained IPv6LL
Nov 4 04:56:36.537409 systemd-networkd[1499]: lxc_health: Link UP
Nov 4 04:56:36.555418 systemd-networkd[1499]: lxc_health: Gained carrier
Nov 4 04:56:36.985965 systemd-networkd[1499]: cilium_vxlan: Gained IPv6LL
Nov 4 04:56:37.144806 kernel: eth0: renamed from tmp6a3bd
Nov 4 04:56:37.148083 systemd-networkd[1499]: lxc2e32859ebb34: Link UP
Nov 4 04:56:37.148446 systemd-networkd[1499]: lxc2e32859ebb34: Gained carrier
Nov 4 04:56:37.172962 kernel: eth0: renamed from tmpf8ace
Nov 4 04:56:37.173549 systemd-networkd[1499]: lxc47c8fbddc865: Link UP
Nov 4 04:56:37.178499 systemd-networkd[1499]: lxc47c8fbddc865: Gained carrier
Nov 4 04:56:37.562950 systemd-networkd[1499]: lxc_health: Gained IPv6LL
Nov 4 04:56:38.459533 kubelet[2780]: E1104 04:56:38.459481 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:39.019173 kubelet[2780]: E1104 04:56:39.018048 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:39.034278 systemd-networkd[1499]: lxc2e32859ebb34: Gained IPv6LL
Nov 4 04:56:39.161954 systemd-networkd[1499]: lxc47c8fbddc865: Gained IPv6LL
Nov 4 04:56:40.019401 kubelet[2780]: E1104 04:56:40.019354 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:40.436811 containerd[1610]: time="2025-11-04T04:56:40.436747909Z" level=info msg="connecting to shim f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa" address="unix:///run/containerd/s/e6098f0a96192d8b597f0399ef75583000ee3c6978a31cd381e6b38e3842a477" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:56:40.485803 containerd[1610]: time="2025-11-04T04:56:40.485468509Z" level=info msg="connecting to shim 6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb" address="unix:///run/containerd/s/c44c1e9c922dcdc7c24126ac0c4baf7a308458dc8086c8569296aa7d830bbd79" namespace=k8s.io protocol=ttrpc version=3
Nov 4 04:56:40.489579 systemd[1]: Started cri-containerd-f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa.scope - libcontainer container f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa.
Nov 4 04:56:40.522966 systemd[1]: Started cri-containerd-6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb.scope - libcontainer container 6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb.
Nov 4 04:56:40.612432 containerd[1610]: time="2025-11-04T04:56:40.612392086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f2kkm,Uid:ffb11247-e671-4fa4-a01f-96ffae47d8f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa\""
Nov 4 04:56:40.615295 kubelet[2780]: E1104 04:56:40.615275 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:40.623532 containerd[1610]: time="2025-11-04T04:56:40.623444969Z" level=info msg="CreateContainer within sandbox \"f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 4 04:56:40.633693 containerd[1610]: time="2025-11-04T04:56:40.633671319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9qqm,Uid:8edeb4ab-93d3-40aa-bcf7-009d2cf2ad8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb\""
Nov 4 04:56:40.635053 kubelet[2780]: E1104 04:56:40.635024 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:40.636454 containerd[1610]: time="2025-11-04T04:56:40.636094278Z" level=info msg="Container 96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:40.643113 containerd[1610]: time="2025-11-04T04:56:40.642323934Z" level=info msg="CreateContainer within sandbox \"f8ace69844c22025f0774ca8ab951add4c1dfe55662377db6bbbdabe595cecfa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639\""
Nov 4 04:56:40.645306 containerd[1610]: time="2025-11-04T04:56:40.645250067Z" level=info msg="CreateContainer within sandbox \"6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 4 04:56:40.645455 containerd[1610]: time="2025-11-04T04:56:40.645438422Z" level=info msg="StartContainer for \"96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639\""
Nov 4 04:56:40.647798 containerd[1610]: time="2025-11-04T04:56:40.646494822Z" level=info msg="connecting to shim 96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639" address="unix:///run/containerd/s/e6098f0a96192d8b597f0399ef75583000ee3c6978a31cd381e6b38e3842a477" protocol=ttrpc version=3
Nov 4 04:56:40.662406 containerd[1610]: time="2025-11-04T04:56:40.662383853Z" level=info msg="Container f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff: CDI devices from CRI Config.CDIDevices: []"
Nov 4 04:56:40.668732 containerd[1610]: time="2025-11-04T04:56:40.668707762Z" level=info msg="CreateContainer within sandbox \"6a3bdbd7d82673627e4146c6bb55fe87eb15c784affcb096e69ca6b1b3b1a8cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff\""
Nov 4 04:56:40.671072 containerd[1610]: time="2025-11-04T04:56:40.671053128Z" level=info msg="StartContainer for \"f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff\""
Nov 4 04:56:40.671857 containerd[1610]: time="2025-11-04T04:56:40.671836050Z" level=info msg="connecting to shim f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff" address="unix:///run/containerd/s/c44c1e9c922dcdc7c24126ac0c4baf7a308458dc8086c8569296aa7d830bbd79" protocol=ttrpc version=3
Nov 4 04:56:40.683921 systemd[1]: Started cri-containerd-96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639.scope - libcontainer container 96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639.
Nov 4 04:56:40.698912 systemd[1]: Started cri-containerd-f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff.scope - libcontainer container f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff.
Nov 4 04:56:40.753617 containerd[1610]: time="2025-11-04T04:56:40.753563966Z" level=info msg="StartContainer for \"96b41b479fe0fe57b85b0cb9095d6d26f535800489ee1364def9db64479fb639\" returns successfully"
Nov 4 04:56:40.775420 containerd[1610]: time="2025-11-04T04:56:40.775303502Z" level=info msg="StartContainer for \"f18fee5249736fccfd4bf7c1f6f117fd5abbd1f0413450ca9995e787103231ff\" returns successfully"
Nov 4 04:56:41.022917 kubelet[2780]: E1104 04:56:41.022644 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:41.027631 kubelet[2780]: E1104 04:56:41.027607 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:41.037692 kubelet[2780]: I1104 04:56:41.037289 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r9qqm" podStartSLOduration=21.037279373 podStartE2EDuration="21.037279373s" podCreationTimestamp="2025-11-04 04:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:41.035985488 +0000 UTC m=+26.322005075" watchObservedRunningTime="2025-11-04 04:56:41.037279373 +0000 UTC m=+26.323298950"
Nov 4 04:56:41.067189 kubelet[2780]: I1104 04:56:41.067117 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f2kkm" podStartSLOduration=21.067104142 podStartE2EDuration="21.067104142s" podCreationTimestamp="2025-11-04 04:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 04:56:41.065420477 +0000 UTC m=+26.351440064" watchObservedRunningTime="2025-11-04 04:56:41.067104142 +0000 UTC m=+26.353123709"
Nov 4 04:56:42.028807 kubelet[2780]: E1104 04:56:42.028578 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:42.028807 kubelet[2780]: E1104 04:56:42.028647 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:43.031367 kubelet[2780]: E1104 04:56:43.031277 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:56:43.032729 kubelet[2780]: E1104 04:56:43.032633 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:26.855056 kubelet[2780]: E1104 04:57:26.854546 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:29.854335 kubelet[2780]: E1104 04:57:29.854007 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:33.853715 kubelet[2780]: E1104 04:57:33.853586 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:43.853645 kubelet[2780]: E1104 04:57:43.853559 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:44.854589 kubelet[2780]: E1104 04:57:44.854229 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:52.854815 kubelet[2780]: E1104 04:57:52.854182 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:57:57.854014 kubelet[2780]: E1104 04:57:57.853962 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:07.854558 kubelet[2780]: E1104 04:58:07.854508 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:40.854637 kubelet[2780]: E1104 04:58:40.853932 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:43.853948 kubelet[2780]: E1104 04:58:43.853902 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:48.855322 kubelet[2780]: E1104 04:58:48.854938 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:51.854328 kubelet[2780]: E1104 04:58:51.854243 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:58:54.854333 kubelet[2780]: E1104 04:58:54.854041 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:59:02.164492 systemd[1]: Started sshd@7-172.232.15.13:22-139.178.89.65:38796.service - OpenSSH per-connection server daemon (139.178.89.65:38796).
Nov 4 04:59:02.480911 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 38796 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:02.483616 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:02.489876 systemd-logind[1579]: New session 8 of user core.
Nov 4 04:59:02.493917 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 4 04:59:02.716487 sshd[4118]: Connection closed by 139.178.89.65 port 38796
Nov 4 04:59:02.717392 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:02.723200 systemd-logind[1579]: Session 8 logged out. Waiting for processes to exit.
Nov 4 04:59:02.723730 systemd[1]: sshd@7-172.232.15.13:22-139.178.89.65:38796.service: Deactivated successfully.
Nov 4 04:59:02.726159 systemd[1]: session-8.scope: Deactivated successfully.
Nov 4 04:59:02.728280 systemd-logind[1579]: Removed session 8.
Nov 4 04:59:07.777328 systemd[1]: Started sshd@8-172.232.15.13:22-139.178.89.65:39856.service - OpenSSH per-connection server daemon (139.178.89.65:39856).
Nov 4 04:59:08.074368 sshd[4131]: Accepted publickey for core from 139.178.89.65 port 39856 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:08.075663 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:08.080587 systemd-logind[1579]: New session 9 of user core.
Nov 4 04:59:08.086925 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 4 04:59:08.304115 sshd[4134]: Connection closed by 139.178.89.65 port 39856
Nov 4 04:59:08.305067 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:08.310986 systemd[1]: sshd@8-172.232.15.13:22-139.178.89.65:39856.service: Deactivated successfully.
Nov 4 04:59:08.313716 systemd[1]: session-9.scope: Deactivated successfully.
Nov 4 04:59:08.315062 systemd-logind[1579]: Session 9 logged out. Waiting for processes to exit.
Nov 4 04:59:08.317281 systemd-logind[1579]: Removed session 9.
Nov 4 04:59:09.853695 kubelet[2780]: E1104 04:59:09.853657 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:59:13.371598 systemd[1]: Started sshd@9-172.232.15.13:22-139.178.89.65:39868.service - OpenSSH per-connection server daemon (139.178.89.65:39868).
Nov 4 04:59:13.679502 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 39868 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:13.681647 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:13.688844 systemd-logind[1579]: New session 10 of user core.
Nov 4 04:59:13.695935 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 4 04:59:13.900959 sshd[4150]: Connection closed by 139.178.89.65 port 39868
Nov 4 04:59:13.901587 sshd-session[4147]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:13.905973 systemd-logind[1579]: Session 10 logged out. Waiting for processes to exit.
Nov 4 04:59:13.906293 systemd[1]: sshd@9-172.232.15.13:22-139.178.89.65:39868.service: Deactivated successfully.
Nov 4 04:59:13.908583 systemd[1]: session-10.scope: Deactivated successfully.
Nov 4 04:59:13.910646 systemd-logind[1579]: Removed session 10.
Nov 4 04:59:18.975495 systemd[1]: Started sshd@10-172.232.15.13:22-139.178.89.65:60882.service - OpenSSH per-connection server daemon (139.178.89.65:60882).
Nov 4 04:59:19.281334 sshd[4165]: Accepted publickey for core from 139.178.89.65 port 60882 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:19.282824 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:19.291950 systemd-logind[1579]: New session 11 of user core.
Nov 4 04:59:19.297906 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 4 04:59:19.508760 sshd[4168]: Connection closed by 139.178.89.65 port 60882
Nov 4 04:59:19.509319 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:19.514474 systemd[1]: sshd@10-172.232.15.13:22-139.178.89.65:60882.service: Deactivated successfully.
Nov 4 04:59:19.517315 systemd[1]: session-11.scope: Deactivated successfully.
Nov 4 04:59:19.518834 systemd-logind[1579]: Session 11 logged out. Waiting for processes to exit.
Nov 4 04:59:19.520690 systemd-logind[1579]: Removed session 11.
Nov 4 04:59:24.592472 systemd[1]: Started sshd@11-172.232.15.13:22-139.178.89.65:60896.service - OpenSSH per-connection server daemon (139.178.89.65:60896).
Nov 4 04:59:24.910739 sshd[4183]: Accepted publickey for core from 139.178.89.65 port 60896 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:24.912259 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:24.917514 systemd-logind[1579]: New session 12 of user core.
Nov 4 04:59:24.921916 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 4 04:59:25.139384 sshd[4186]: Connection closed by 139.178.89.65 port 60896
Nov 4 04:59:25.139930 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:25.143525 systemd[1]: sshd@11-172.232.15.13:22-139.178.89.65:60896.service: Deactivated successfully.
Nov 4 04:59:25.145852 systemd[1]: session-12.scope: Deactivated successfully.
Nov 4 04:59:25.147404 systemd-logind[1579]: Session 12 logged out. Waiting for processes to exit.
Nov 4 04:59:25.149045 systemd-logind[1579]: Removed session 12.
Nov 4 04:59:25.854829 kubelet[2780]: E1104 04:59:25.854542 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:59:25.855681 kubelet[2780]: E1104 04:59:25.855563 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:59:30.205508 systemd[1]: Started sshd@12-172.232.15.13:22-139.178.89.65:38918.service - OpenSSH per-connection server daemon (139.178.89.65:38918).
Nov 4 04:59:30.512944 sshd[4199]: Accepted publickey for core from 139.178.89.65 port 38918 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:30.514962 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:30.520831 systemd-logind[1579]: New session 13 of user core.
Nov 4 04:59:30.526024 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 4 04:59:30.732040 sshd[4202]: Connection closed by 139.178.89.65 port 38918
Nov 4 04:59:30.733146 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:30.741020 systemd[1]: sshd@12-172.232.15.13:22-139.178.89.65:38918.service: Deactivated successfully.
Nov 4 04:59:30.744356 systemd[1]: session-13.scope: Deactivated successfully.
Nov 4 04:59:30.746108 systemd-logind[1579]: Session 13 logged out. Waiting for processes to exit.
Nov 4 04:59:30.748628 systemd-logind[1579]: Removed session 13.
Nov 4 04:59:30.792357 systemd[1]: Started sshd@13-172.232.15.13:22-139.178.89.65:38926.service - OpenSSH per-connection server daemon (139.178.89.65:38926).
Nov 4 04:59:31.093073 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 38926 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:31.095033 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:31.105861 systemd-logind[1579]: New session 14 of user core.
Nov 4 04:59:31.116955 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 4 04:59:31.359090 sshd[4217]: Connection closed by 139.178.89.65 port 38926
Nov 4 04:59:31.359984 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:31.364593 systemd[1]: sshd@13-172.232.15.13:22-139.178.89.65:38926.service: Deactivated successfully.
Nov 4 04:59:31.367071 systemd[1]: session-14.scope: Deactivated successfully.
Nov 4 04:59:31.368482 systemd-logind[1579]: Session 14 logged out. Waiting for processes to exit.
Nov 4 04:59:31.370017 systemd-logind[1579]: Removed session 14.
Nov 4 04:59:31.423179 systemd[1]: Started sshd@14-172.232.15.13:22-139.178.89.65:38930.service - OpenSSH per-connection server daemon (139.178.89.65:38930).
Nov 4 04:59:31.716002 sshd[4227]: Accepted publickey for core from 139.178.89.65 port 38930 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:31.717738 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:31.723257 systemd-logind[1579]: New session 15 of user core.
Nov 4 04:59:31.732941 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 4 04:59:31.931475 sshd[4230]: Connection closed by 139.178.89.65 port 38930
Nov 4 04:59:31.932993 sshd-session[4227]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:31.937808 systemd[1]: sshd@14-172.232.15.13:22-139.178.89.65:38930.service: Deactivated successfully.
Nov 4 04:59:31.940243 systemd[1]: session-15.scope: Deactivated successfully.
Nov 4 04:59:31.942719 systemd-logind[1579]: Session 15 logged out. Waiting for processes to exit.
Nov 4 04:59:31.944316 systemd-logind[1579]: Removed session 15.
Nov 4 04:59:37.000142 systemd[1]: Started sshd@15-172.232.15.13:22-139.178.89.65:46422.service - OpenSSH per-connection server daemon (139.178.89.65:46422).
Nov 4 04:59:37.312958 sshd[4242]: Accepted publickey for core from 139.178.89.65 port 46422 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:37.314310 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:37.325047 systemd-logind[1579]: New session 16 of user core.
Nov 4 04:59:37.329908 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 4 04:59:37.537094 sshd[4245]: Connection closed by 139.178.89.65 port 46422
Nov 4 04:59:37.537689 sshd-session[4242]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:37.542884 systemd[1]: sshd@15-172.232.15.13:22-139.178.89.65:46422.service: Deactivated successfully.
Nov 4 04:59:37.546337 systemd[1]: session-16.scope: Deactivated successfully.
Nov 4 04:59:37.549173 systemd-logind[1579]: Session 16 logged out. Waiting for processes to exit.
Nov 4 04:59:37.551097 systemd-logind[1579]: Removed session 16.
Nov 4 04:59:42.597011 systemd[1]: Started sshd@16-172.232.15.13:22-139.178.89.65:46424.service - OpenSSH per-connection server daemon (139.178.89.65:46424).
Nov 4 04:59:42.901744 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 46424 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:42.903999 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:42.910672 systemd-logind[1579]: New session 17 of user core.
Nov 4 04:59:42.916919 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 4 04:59:43.130305 sshd[4260]: Connection closed by 139.178.89.65 port 46424
Nov 4 04:59:43.131495 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:43.137190 systemd[1]: sshd@16-172.232.15.13:22-139.178.89.65:46424.service: Deactivated successfully.
Nov 4 04:59:43.140039 systemd[1]: session-17.scope: Deactivated successfully.
Nov 4 04:59:43.143602 systemd-logind[1579]: Session 17 logged out. Waiting for processes to exit.
Nov 4 04:59:43.145475 systemd-logind[1579]: Removed session 17.
Nov 4 04:59:44.854636 kubelet[2780]: E1104 04:59:44.854275 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 04:59:48.192178 systemd[1]: Started sshd@17-172.232.15.13:22-139.178.89.65:32870.service - OpenSSH per-connection server daemon (139.178.89.65:32870).
Nov 4 04:59:48.505433 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 32870 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:48.507190 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:48.512698 systemd-logind[1579]: New session 18 of user core.
Nov 4 04:59:48.523922 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 4 04:59:48.726209 sshd[4275]: Connection closed by 139.178.89.65 port 32870
Nov 4 04:59:48.727164 sshd-session[4272]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:48.732218 systemd[1]: sshd@17-172.232.15.13:22-139.178.89.65:32870.service: Deactivated successfully.
Nov 4 04:59:48.735190 systemd[1]: session-18.scope: Deactivated successfully.
Nov 4 04:59:48.736411 systemd-logind[1579]: Session 18 logged out. Waiting for processes to exit.
Nov 4 04:59:48.738227 systemd-logind[1579]: Removed session 18.
Nov 4 04:59:48.793491 systemd[1]: Started sshd@18-172.232.15.13:22-139.178.89.65:32872.service - OpenSSH per-connection server daemon (139.178.89.65:32872).
Nov 4 04:59:49.106128 sshd[4287]: Accepted publickey for core from 139.178.89.65 port 32872 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:49.107527 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:49.115101 systemd-logind[1579]: New session 19 of user core.
Nov 4 04:59:49.119915 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 4 04:59:49.350395 sshd[4290]: Connection closed by 139.178.89.65 port 32872
Nov 4 04:59:49.351076 sshd-session[4287]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:49.355898 systemd[1]: sshd@18-172.232.15.13:22-139.178.89.65:32872.service: Deactivated successfully.
Nov 4 04:59:49.358920 systemd[1]: session-19.scope: Deactivated successfully.
Nov 4 04:59:49.360336 systemd-logind[1579]: Session 19 logged out. Waiting for processes to exit.
Nov 4 04:59:49.361710 systemd-logind[1579]: Removed session 19.
Nov 4 04:59:49.425329 systemd[1]: Started sshd@19-172.232.15.13:22-139.178.89.65:32874.service - OpenSSH per-connection server daemon (139.178.89.65:32874).
Nov 4 04:59:49.722837 sshd[4300]: Accepted publickey for core from 139.178.89.65 port 32874 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:49.725169 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:49.731446 systemd-logind[1579]: New session 20 of user core.
Nov 4 04:59:49.738143 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 4 04:59:50.540645 sshd[4303]: Connection closed by 139.178.89.65 port 32874
Nov 4 04:59:50.541968 sshd-session[4300]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:50.546912 systemd[1]: sshd@19-172.232.15.13:22-139.178.89.65:32874.service: Deactivated successfully.
Nov 4 04:59:50.550021 systemd[1]: session-20.scope: Deactivated successfully.
Nov 4 04:59:50.551279 systemd-logind[1579]: Session 20 logged out. Waiting for processes to exit.
Nov 4 04:59:50.553580 systemd-logind[1579]: Removed session 20.
Nov 4 04:59:50.606389 systemd[1]: Started sshd@20-172.232.15.13:22-139.178.89.65:32884.service - OpenSSH per-connection server daemon (139.178.89.65:32884).
Nov 4 04:59:50.921437 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 32884 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:50.923448 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:50.931689 systemd-logind[1579]: New session 21 of user core.
Nov 4 04:59:50.938131 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 4 04:59:51.265301 sshd[4323]: Connection closed by 139.178.89.65 port 32884
Nov 4 04:59:51.267192 sshd-session[4320]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:51.276580 systemd-logind[1579]: Session 21 logged out. Waiting for processes to exit.
Nov 4 04:59:51.276812 systemd[1]: sshd@20-172.232.15.13:22-139.178.89.65:32884.service: Deactivated successfully.
Nov 4 04:59:51.279440 systemd[1]: session-21.scope: Deactivated successfully.
Nov 4 04:59:51.282514 systemd-logind[1579]: Removed session 21.
Nov 4 04:59:51.325163 systemd[1]: Started sshd@21-172.232.15.13:22-139.178.89.65:32886.service - OpenSSH per-connection server daemon (139.178.89.65:32886).
Nov 4 04:59:51.619149 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 32886 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:51.621396 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:51.626179 systemd-logind[1579]: New session 22 of user core.
Nov 4 04:59:51.636974 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 4 04:59:51.844662 sshd[4339]: Connection closed by 139.178.89.65 port 32886
Nov 4 04:59:51.844114 sshd-session[4336]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:51.854037 systemd[1]: sshd@21-172.232.15.13:22-139.178.89.65:32886.service: Deactivated successfully.
Nov 4 04:59:51.857670 systemd[1]: session-22.scope: Deactivated successfully.
Nov 4 04:59:51.859008 systemd-logind[1579]: Session 22 logged out. Waiting for processes to exit.
Nov 4 04:59:51.861366 systemd-logind[1579]: Removed session 22.
Nov 4 04:59:56.915131 systemd[1]: Started sshd@22-172.232.15.13:22-139.178.89.65:40476.service - OpenSSH per-connection server daemon (139.178.89.65:40476).
Nov 4 04:59:57.215944 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 40476 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 04:59:57.217748 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 04:59:57.222938 systemd-logind[1579]: New session 23 of user core.
Nov 4 04:59:57.230932 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 4 04:59:57.445249 sshd[4353]: Connection closed by 139.178.89.65 port 40476
Nov 4 04:59:57.445994 sshd-session[4350]: pam_unix(sshd:session): session closed for user core
Nov 4 04:59:57.451309 systemd[1]: sshd@22-172.232.15.13:22-139.178.89.65:40476.service: Deactivated successfully.
Nov 4 04:59:57.453708 systemd[1]: session-23.scope: Deactivated successfully.
Nov 4 04:59:57.454539 systemd-logind[1579]: Session 23 logged out. Waiting for processes to exit.
Nov 4 04:59:57.456521 systemd-logind[1579]: Removed session 23.
Nov 4 04:59:58.853896 kubelet[2780]: E1104 04:59:58.853600 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:01.854148 kubelet[2780]: E1104 05:00:01.854106 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:02.514477 systemd[1]: Started sshd@23-172.232.15.13:22-139.178.89.65:40484.service - OpenSSH per-connection server daemon (139.178.89.65:40484).
Nov 4 05:00:02.822991 sshd[4365]: Accepted publickey for core from 139.178.89.65 port 40484 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:02.824431 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:02.829051 systemd-logind[1579]: New session 24 of user core.
Nov 4 05:00:02.835927 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 4 05:00:03.052037 sshd[4368]: Connection closed by 139.178.89.65 port 40484
Nov 4 05:00:03.053124 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:03.058682 systemd[1]: sshd@23-172.232.15.13:22-139.178.89.65:40484.service: Deactivated successfully.
Nov 4 05:00:03.062314 systemd[1]: session-24.scope: Deactivated successfully.
Nov 4 05:00:03.063955 systemd-logind[1579]: Session 24 logged out. Waiting for processes to exit.
Nov 4 05:00:03.066125 systemd-logind[1579]: Removed session 24.
Nov 4 05:00:08.113938 systemd[1]: Started sshd@24-172.232.15.13:22-139.178.89.65:57416.service - OpenSSH per-connection server daemon (139.178.89.65:57416).
Nov 4 05:00:08.414057 sshd[4382]: Accepted publickey for core from 139.178.89.65 port 57416 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:08.415421 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:08.420430 systemd-logind[1579]: New session 25 of user core.
Nov 4 05:00:08.428100 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 4 05:00:08.628283 sshd[4385]: Connection closed by 139.178.89.65 port 57416
Nov 4 05:00:08.628889 sshd-session[4382]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:08.634111 systemd-logind[1579]: Session 25 logged out. Waiting for processes to exit.
Nov 4 05:00:08.634347 systemd[1]: sshd@24-172.232.15.13:22-139.178.89.65:57416.service: Deactivated successfully.
Nov 4 05:00:08.637488 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 05:00:08.639945 systemd-logind[1579]: Removed session 25.
Nov 4 05:00:12.854332 kubelet[2780]: E1104 05:00:12.854048 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:13.691155 systemd[1]: Started sshd@25-172.232.15.13:22-139.178.89.65:57426.service - OpenSSH per-connection server daemon (139.178.89.65:57426).
Nov 4 05:00:13.990939 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 57426 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:13.992275 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:13.999408 systemd-logind[1579]: New session 26 of user core.
Nov 4 05:00:14.004940 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 4 05:00:14.214291 sshd[4403]: Connection closed by 139.178.89.65 port 57426
Nov 4 05:00:14.215241 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:14.223895 systemd[1]: sshd@25-172.232.15.13:22-139.178.89.65:57426.service: Deactivated successfully.
Nov 4 05:00:14.227020 systemd[1]: session-26.scope: Deactivated successfully.
Nov 4 05:00:14.228600 systemd-logind[1579]: Session 26 logged out. Waiting for processes to exit.
Nov 4 05:00:14.230548 systemd-logind[1579]: Removed session 26.
Nov 4 05:00:17.854139 kubelet[2780]: E1104 05:00:17.854108 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:18.853974 kubelet[2780]: E1104 05:00:18.853701 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:19.276057 systemd[1]: Started sshd@26-172.232.15.13:22-139.178.89.65:52788.service - OpenSSH per-connection server daemon (139.178.89.65:52788).
Nov 4 05:00:19.572333 sshd[4417]: Accepted publickey for core from 139.178.89.65 port 52788 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:19.573930 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:19.578923 systemd-logind[1579]: New session 27 of user core.
Nov 4 05:00:19.584950 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 4 05:00:19.792868 sshd[4420]: Connection closed by 139.178.89.65 port 52788
Nov 4 05:00:19.793883 sshd-session[4417]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:19.800324 systemd[1]: sshd@26-172.232.15.13:22-139.178.89.65:52788.service: Deactivated successfully.
Nov 4 05:00:19.803347 systemd[1]: session-27.scope: Deactivated successfully.
Nov 4 05:00:19.805195 systemd-logind[1579]: Session 27 logged out. Waiting for processes to exit.
Nov 4 05:00:19.806745 systemd-logind[1579]: Removed session 27.
Nov 4 05:00:24.864617 systemd[1]: Started sshd@27-172.232.15.13:22-139.178.89.65:52790.service - OpenSSH per-connection server daemon (139.178.89.65:52790).
Nov 4 05:00:25.171428 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 52790 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:25.174059 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:25.179750 systemd-logind[1579]: New session 28 of user core.
Nov 4 05:00:25.187903 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 4 05:00:25.392146 sshd[4437]: Connection closed by 139.178.89.65 port 52790
Nov 4 05:00:25.393457 sshd-session[4434]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:25.398158 systemd-logind[1579]: Session 28 logged out. Waiting for processes to exit.
Nov 4 05:00:25.398373 systemd[1]: sshd@27-172.232.15.13:22-139.178.89.65:52790.service: Deactivated successfully.
Nov 4 05:00:25.401179 systemd[1]: session-28.scope: Deactivated successfully.
Nov 4 05:00:25.403318 systemd-logind[1579]: Removed session 28.
Nov 4 05:00:25.454817 systemd[1]: Started sshd@28-172.232.15.13:22-139.178.89.65:52802.service - OpenSSH per-connection server daemon (139.178.89.65:52802).
Nov 4 05:00:25.754975 sshd[4449]: Accepted publickey for core from 139.178.89.65 port 52802 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:25.757208 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:25.764113 systemd-logind[1579]: New session 29 of user core.
Nov 4 05:00:25.770962 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 4 05:00:26.676991 update_engine[1582]: I20251104 05:00:26.676898 1582 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 4 05:00:26.676991 update_engine[1582]: I20251104 05:00:26.676975 1582 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 4 05:00:26.677601 update_engine[1582]: I20251104 05:00:26.677241 1582 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 4 05:00:26.678130 update_engine[1582]: I20251104 05:00:26.678098 1582 omaha_request_params.cc:62] Current group set to developer
Nov 4 05:00:26.678243 update_engine[1582]: I20251104 05:00:26.678217 1582 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678311 1582 update_attempter.cc:643] Scheduling an action processor start.
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678339 1582 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678381 1582 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678440 1582 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678449 1582 omaha_request_action.cc:272] Request:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]:
Nov 4 05:00:26.678805 update_engine[1582]: I20251104 05:00:26.678457 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 4 05:00:26.679089 locksmithd[1625]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 4 05:00:26.680030 update_engine[1582]: I20251104 05:00:26.679996 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 4 05:00:26.680556 update_engine[1582]: I20251104 05:00:26.680518 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 4 05:00:26.705075 update_engine[1582]: E20251104 05:00:26.705032 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 4 05:00:26.705132 update_engine[1582]: I20251104 05:00:26.705098 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 4 05:00:27.158574 containerd[1610]: time="2025-11-04T05:00:27.158480354Z" level=info msg="StopContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" with timeout 30 (s)"
Nov 4 05:00:27.161843 containerd[1610]: time="2025-11-04T05:00:27.161636541Z" level=info msg="Stop container \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" with signal terminated"
Nov 4 05:00:27.187156 systemd[1]: cri-containerd-3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c.scope: Deactivated successfully.
Nov 4 05:00:27.189854 containerd[1610]: time="2025-11-04T05:00:27.189692598Z" level=info msg="received exit event container_id:\"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" id:\"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" pid:3351 exited_at:{seconds:1762232427 nanos:188734998}"
Nov 4 05:00:27.190232 containerd[1610]: time="2025-11-04T05:00:27.190175363Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 05:00:27.200142 containerd[1610]: time="2025-11-04T05:00:27.200120089Z" level=info msg="StopContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" with timeout 2 (s)"
Nov 4 05:00:27.200521 containerd[1610]: time="2025-11-04T05:00:27.200501105Z" level=info msg="Stop container \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" with signal terminated"
Nov 4 05:00:27.209799 systemd-networkd[1499]: lxc_health: Link DOWN
Nov 4 05:00:27.209810 systemd-networkd[1499]: lxc_health: Lost carrier
Nov 4 05:00:27.236579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c-rootfs.mount: Deactivated successfully.
Nov 4 05:00:27.237312 containerd[1610]: time="2025-11-04T05:00:27.237140843Z" level=info msg="received exit event container_id:\"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" id:\"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" pid:3443 exited_at:{seconds:1762232427 nanos:236950905}"
Nov 4 05:00:27.239344 systemd[1]: cri-containerd-33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350.scope: Deactivated successfully.
Nov 4 05:00:27.240272 systemd[1]: cri-containerd-33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350.scope: Consumed 6.901s CPU time, 124M memory peak, 136K read from disk, 13.3M written to disk.
Nov 4 05:00:27.253564 containerd[1610]: time="2025-11-04T05:00:27.253353444Z" level=info msg="StopContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" returns successfully"
Nov 4 05:00:27.255411 containerd[1610]: time="2025-11-04T05:00:27.255329054Z" level=info msg="StopPodSandbox for \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\""
Nov 4 05:00:27.255607 containerd[1610]: time="2025-11-04T05:00:27.255560411Z" level=info msg="Container to stop \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.267589 systemd[1]: cri-containerd-c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541.scope: Deactivated successfully.
Nov 4 05:00:27.285455 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350-rootfs.mount: Deactivated successfully.
Nov 4 05:00:27.295995 containerd[1610]: time="2025-11-04T05:00:27.295933390Z" level=info msg="StopContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" returns successfully"
Nov 4 05:00:27.297024 containerd[1610]: time="2025-11-04T05:00:27.296747782Z" level=info msg="StopPodSandbox for \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\""
Nov 4 05:00:27.297129 containerd[1610]: time="2025-11-04T05:00:27.297110518Z" level=info msg="Container to stop \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.297229 containerd[1610]: time="2025-11-04T05:00:27.297212397Z" level=info msg="Container to stop \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.297421 containerd[1610]: time="2025-11-04T05:00:27.297403585Z" level=info msg="Container to stop \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.297581 containerd[1610]: time="2025-11-04T05:00:27.297535433Z" level=info msg="Container to stop \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.297682 containerd[1610]: time="2025-11-04T05:00:27.297665122Z" level=info msg="Container to stop \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 4 05:00:27.313470 systemd[1]: cri-containerd-eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056.scope: Deactivated successfully.
Nov 4 05:00:27.342757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541-rootfs.mount: Deactivated successfully.
Nov 4 05:00:27.348920 containerd[1610]: time="2025-11-04T05:00:27.348868368Z" level=info msg="shim disconnected" id=c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541 namespace=k8s.io
Nov 4 05:00:27.348920 containerd[1610]: time="2025-11-04T05:00:27.348903878Z" level=info msg="cleaning up after shim disconnected" id=c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541 namespace=k8s.io
Nov 4 05:00:27.349404 containerd[1610]: time="2025-11-04T05:00:27.348917577Z" level=info msg="cleaning up dead shim" id=c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541 namespace=k8s.io
Nov 4 05:00:27.353838 containerd[1610]: time="2025-11-04T05:00:27.352314952Z" level=info msg="received exit event sandbox_id:\"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\" exit_status:137 exited_at:{seconds:1762232427 nanos:273159288}"
Nov 4 05:00:27.353414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541-shm.mount: Deactivated successfully.
Nov 4 05:00:27.354572 containerd[1610]: time="2025-11-04T05:00:27.354553189Z" level=info msg="TearDown network for sandbox \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\" successfully"
Nov 4 05:00:27.354749 containerd[1610]: time="2025-11-04T05:00:27.354732687Z" level=info msg="StopPodSandbox for \"c99244716fec1e02b765747dd00cb9130a45294137b4eb561038a9a0016f1541\" returns successfully"
Nov 4 05:00:27.365923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056-rootfs.mount: Deactivated successfully.
Nov 4 05:00:27.368445 containerd[1610]: time="2025-11-04T05:00:27.368425004Z" level=info msg="received exit event sandbox_id:\"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" exit_status:137 exited_at:{seconds:1762232427 nanos:319311516}"
Nov 4 05:00:27.373303 containerd[1610]: time="2025-11-04T05:00:27.368921029Z" level=info msg="shim disconnected" id=eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056 namespace=k8s.io
Nov 4 05:00:27.373303 containerd[1610]: time="2025-11-04T05:00:27.373163185Z" level=info msg="cleaning up after shim disconnected" id=eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056 namespace=k8s.io
Nov 4 05:00:27.373303 containerd[1610]: time="2025-11-04T05:00:27.373171714Z" level=info msg="cleaning up dead shim" id=eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056 namespace=k8s.io
Nov 4 05:00:27.374153 containerd[1610]: time="2025-11-04T05:00:27.371389683Z" level=info msg="TearDown network for sandbox \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" successfully"
Nov 4 05:00:27.374153 containerd[1610]: time="2025-11-04T05:00:27.374047235Z" level=info msg="StopPodSandbox for \"eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056\" returns successfully"
Nov 4 05:00:27.441212 kubelet[2780]: I1104 05:00:27.440264 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n94wk\" (UniqueName: \"kubernetes.io/projected/bc821d30-98ae-4341-9591-4068a1937a63-kube-api-access-n94wk\") pod \"bc821d30-98ae-4341-9591-4068a1937a63\" (UID: \"bc821d30-98ae-4341-9591-4068a1937a63\") "
Nov 4 05:00:27.441212 kubelet[2780]: I1104 05:00:27.440311 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc821d30-98ae-4341-9591-4068a1937a63-cilium-config-path\") pod \"bc821d30-98ae-4341-9591-4068a1937a63\" (UID: \"bc821d30-98ae-4341-9591-4068a1937a63\") "
Nov 4 05:00:27.444279 kubelet[2780]: I1104 05:00:27.444244 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc821d30-98ae-4341-9591-4068a1937a63-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc821d30-98ae-4341-9591-4068a1937a63" (UID: "bc821d30-98ae-4341-9591-4068a1937a63"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 4 05:00:27.445694 kubelet[2780]: I1104 05:00:27.445648 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc821d30-98ae-4341-9591-4068a1937a63-kube-api-access-n94wk" (OuterVolumeSpecName: "kube-api-access-n94wk") pod "bc821d30-98ae-4341-9591-4068a1937a63" (UID: "bc821d30-98ae-4341-9591-4068a1937a63"). InnerVolumeSpecName "kube-api-access-n94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 4 05:00:27.473408 kubelet[2780]: I1104 05:00:27.472410 2780 scope.go:117] "RemoveContainer" containerID="33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350"
Nov 4 05:00:27.475378 containerd[1610]: time="2025-11-04T05:00:27.475322159Z" level=info msg="RemoveContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\""
Nov 4 05:00:27.495493 containerd[1610]: time="2025-11-04T05:00:27.495446419Z" level=info msg="RemoveContainer for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" returns successfully"
Nov 4 05:00:27.495984 systemd[1]: Removed slice kubepods-besteffort-podbc821d30_98ae_4341_9591_4068a1937a63.slice - libcontainer container kubepods-besteffort-podbc821d30_98ae_4341_9591_4068a1937a63.slice.
Nov 4 05:00:27.496797 kubelet[2780]: I1104 05:00:27.496099 2780 scope.go:117] "RemoveContainer" containerID="1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333" Nov 4 05:00:27.497630 containerd[1610]: time="2025-11-04T05:00:27.497578167Z" level=info msg="RemoveContainer for \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\"" Nov 4 05:00:27.503012 containerd[1610]: time="2025-11-04T05:00:27.502987531Z" level=info msg="RemoveContainer for \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" returns successfully" Nov 4 05:00:27.503129 kubelet[2780]: I1104 05:00:27.503097 2780 scope.go:117] "RemoveContainer" containerID="d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df" Nov 4 05:00:27.505639 containerd[1610]: time="2025-11-04T05:00:27.505585524Z" level=info msg="RemoveContainer for \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\"" Nov 4 05:00:27.510416 containerd[1610]: time="2025-11-04T05:00:27.510385233Z" level=info msg="RemoveContainer for \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" returns successfully" Nov 4 05:00:27.510990 kubelet[2780]: I1104 05:00:27.510676 2780 scope.go:117] "RemoveContainer" containerID="799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4" Nov 4 05:00:27.513606 containerd[1610]: time="2025-11-04T05:00:27.513574300Z" level=info msg="RemoveContainer for \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\"" Nov 4 05:00:27.516347 containerd[1610]: time="2025-11-04T05:00:27.516316092Z" level=info msg="RemoveContainer for \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" returns successfully" Nov 4 05:00:27.516519 kubelet[2780]: I1104 05:00:27.516446 2780 scope.go:117] "RemoveContainer" containerID="90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e" Nov 4 05:00:27.517826 containerd[1610]: time="2025-11-04T05:00:27.517801746Z" level=info msg="RemoveContainer for 
\"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\"" Nov 4 05:00:27.521287 containerd[1610]: time="2025-11-04T05:00:27.521235660Z" level=info msg="RemoveContainer for \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" returns successfully" Nov 4 05:00:27.521587 kubelet[2780]: I1104 05:00:27.521522 2780 scope.go:117] "RemoveContainer" containerID="33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350" Nov 4 05:00:27.523090 containerd[1610]: time="2025-11-04T05:00:27.523043981Z" level=error msg="ContainerStatus for \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\": not found" Nov 4 05:00:27.523384 kubelet[2780]: E1104 05:00:27.523325 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\": not found" containerID="33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350" Nov 4 05:00:27.523572 kubelet[2780]: I1104 05:00:27.523497 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350"} err="failed to get container status \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\": rpc error: code = NotFound desc = an error occurred when try to find container \"33288dbd10a7e22bc62ed4963da6aac86f14521bcbdd6b298f3869966430a350\": not found" Nov 4 05:00:27.523737 kubelet[2780]: I1104 05:00:27.523661 2780 scope.go:117] "RemoveContainer" containerID="1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333" Nov 4 05:00:27.524489 containerd[1610]: time="2025-11-04T05:00:27.524321228Z" level=error msg="ContainerStatus for 
\"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\": not found" Nov 4 05:00:27.525263 kubelet[2780]: E1104 05:00:27.525157 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\": not found" containerID="1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333" Nov 4 05:00:27.525263 kubelet[2780]: I1104 05:00:27.525175 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333"} err="failed to get container status \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d33df889d9af12a51edc89f7fb5fda4ecce288567126d19cc42ffafefdea333\": not found" Nov 4 05:00:27.525263 kubelet[2780]: I1104 05:00:27.525188 2780 scope.go:117] "RemoveContainer" containerID="d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df" Nov 4 05:00:27.525470 containerd[1610]: time="2025-11-04T05:00:27.525436057Z" level=error msg="ContainerStatus for \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\": not found" Nov 4 05:00:27.525682 kubelet[2780]: E1104 05:00:27.525647 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\": not found" 
containerID="d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df" Nov 4 05:00:27.525682 kubelet[2780]: I1104 05:00:27.525670 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df"} err="failed to get container status \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\": rpc error: code = NotFound desc = an error occurred when try to find container \"d90a22ad077e3e7205b05503b3a55728c206981245f81294cff17d1f03c246df\": not found" Nov 4 05:00:27.525682 kubelet[2780]: I1104 05:00:27.525682 2780 scope.go:117] "RemoveContainer" containerID="799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4" Nov 4 05:00:27.526227 kubelet[2780]: E1104 05:00:27.526211 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\": not found" containerID="799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4" Nov 4 05:00:27.526311 containerd[1610]: time="2025-11-04T05:00:27.526105930Z" level=error msg="ContainerStatus for \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\": not found" Nov 4 05:00:27.526356 kubelet[2780]: I1104 05:00:27.526226 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4"} err="failed to get container status \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\": rpc error: code = NotFound desc = an error occurred when try to find container \"799b232dddf29739b59711094fe6a505cfd2bad13124f3d0e003670579cb5db4\": not found" Nov 4 05:00:27.526356 
kubelet[2780]: I1104 05:00:27.526238 2780 scope.go:117] "RemoveContainer" containerID="90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e" Nov 4 05:00:27.526487 containerd[1610]: time="2025-11-04T05:00:27.526468526Z" level=error msg="ContainerStatus for \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\": not found" Nov 4 05:00:27.526647 kubelet[2780]: E1104 05:00:27.526612 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\": not found" containerID="90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e" Nov 4 05:00:27.526690 kubelet[2780]: I1104 05:00:27.526666 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e"} err="failed to get container status \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\": rpc error: code = NotFound desc = an error occurred when try to find container \"90832ef4f5f2f799da280a70b02281855256bef51890a0e980b8ccef2e8e444e\": not found" Nov 4 05:00:27.526690 kubelet[2780]: I1104 05:00:27.526681 2780 scope.go:117] "RemoveContainer" containerID="3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c" Nov 4 05:00:27.527936 containerd[1610]: time="2025-11-04T05:00:27.527912921Z" level=info msg="RemoveContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\"" Nov 4 05:00:27.530809 containerd[1610]: time="2025-11-04T05:00:27.530752601Z" level=info msg="RemoveContainer for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" returns successfully" Nov 4 05:00:27.530992 kubelet[2780]: I1104 05:00:27.530945 
2780 scope.go:117] "RemoveContainer" containerID="3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c" Nov 4 05:00:27.531364 containerd[1610]: time="2025-11-04T05:00:27.531258086Z" level=error msg="ContainerStatus for \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\": not found" Nov 4 05:00:27.531692 kubelet[2780]: E1104 05:00:27.531625 2780 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\": not found" containerID="3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c" Nov 4 05:00:27.531692 kubelet[2780]: I1104 05:00:27.531673 2780 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c"} err="failed to get container status \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\": rpc error: code = NotFound desc = an error occurred when try to find container \"3bcc0325ca3cca0dfce1c87545355501a1895d457827dd8de6dc7b792658034c\": not found" Nov 4 05:00:27.541153 kubelet[2780]: I1104 05:00:27.541119 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-clustermesh-secrets\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541153 kubelet[2780]: I1104 05:00:27.541160 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-net\") pod 
\"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541153 kubelet[2780]: I1104 05:00:27.541178 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-cgroup\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541194 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cni-path\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541214 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-kernel\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541241 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-run\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541258 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-etc-cni-netd\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541272 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-bpf-maps\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.541988 kubelet[2780]: I1104 05:00:27.541288 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-lib-modules\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541299 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541306 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hubble-tls\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541367 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-config-path\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541395 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9hf\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: 
\"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541411 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hostproc\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542237 kubelet[2780]: I1104 05:00:27.541429 2780 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-xtables-lock\") pod \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\" (UID: \"04eb90a0-0e18-402d-a5bb-5cde9e44fd0c\") " Nov 4 05:00:27.542374 kubelet[2780]: I1104 05:00:27.541476 2780 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-cgroup\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.542374 kubelet[2780]: I1104 05:00:27.541487 2780 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n94wk\" (UniqueName: \"kubernetes.io/projected/bc821d30-98ae-4341-9591-4068a1937a63-kube-api-access-n94wk\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.542374 kubelet[2780]: I1104 05:00:27.541498 2780 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc821d30-98ae-4341-9591-4068a1937a63-cilium-config-path\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.542374 kubelet[2780]: I1104 05:00:27.541517 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.542804 kubelet[2780]: I1104 05:00:27.542765 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cni-path" (OuterVolumeSpecName: "cni-path") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.542909 kubelet[2780]: I1104 05:00:27.542893 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.542983 kubelet[2780]: I1104 05:00:27.542970 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.543940 kubelet[2780]: I1104 05:00:27.543046 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.543940 kubelet[2780]: I1104 05:00:27.543065 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.543940 kubelet[2780]: I1104 05:00:27.543078 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.544856 kubelet[2780]: I1104 05:00:27.544818 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.545483 kubelet[2780]: I1104 05:00:27.545444 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hostproc" (OuterVolumeSpecName: "hostproc") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 05:00:27.547355 kubelet[2780]: I1104 05:00:27.547313 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:00:27.551031 kubelet[2780]: I1104 05:00:27.551003 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 05:00:27.551347 kubelet[2780]: I1104 05:00:27.551321 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 05:00:27.551617 kubelet[2780]: I1104 05:00:27.551573 2780 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf" (OuterVolumeSpecName: "kube-api-access-rz9hf") pod "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" (UID: "04eb90a0-0e18-402d-a5bb-5cde9e44fd0c"). InnerVolumeSpecName "kube-api-access-rz9hf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 05:00:27.642100 kubelet[2780]: I1104 05:00:27.642064 2780 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-lib-modules\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642100 kubelet[2780]: I1104 05:00:27.642086 2780 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hubble-tls\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642100 kubelet[2780]: I1104 05:00:27.642097 2780 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-config-path\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642100 kubelet[2780]: I1104 05:00:27.642109 2780 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rz9hf\" (UniqueName: \"kubernetes.io/projected/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-kube-api-access-rz9hf\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642100 kubelet[2780]: I1104 05:00:27.642119 2780 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-hostproc\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642129 2780 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-xtables-lock\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642139 2780 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-clustermesh-secrets\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 
kubelet[2780]: I1104 05:00:27.642147 2780 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-net\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642156 2780 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cni-path\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642166 2780 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-host-proc-sys-kernel\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642177 2780 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-cilium-run\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642186 2780 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-etc-cni-netd\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.642367 kubelet[2780]: I1104 05:00:27.642195 2780 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c-bpf-maps\") on node \"172-232-15-13\" DevicePath \"\"" Nov 4 05:00:27.780870 systemd[1]: Removed slice kubepods-burstable-pod04eb90a0_0e18_402d_a5bb_5cde9e44fd0c.slice - libcontainer container kubepods-burstable-pod04eb90a0_0e18_402d_a5bb_5cde9e44fd0c.slice. Nov 4 05:00:27.781102 systemd[1]: kubepods-burstable-pod04eb90a0_0e18_402d_a5bb_5cde9e44fd0c.slice: Consumed 7.023s CPU time, 124.4M memory peak, 136K read from disk, 13.3M written to disk. 
Nov 4 05:00:28.232588 systemd[1]: var-lib-kubelet-pods-bc821d30\x2d98ae\x2d4341\x2d9591\x2d4068a1937a63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn94wk.mount: Deactivated successfully. Nov 4 05:00:28.232697 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eef9f5514efec71394595f32272a1bd58cfad083163ff232cb9dd52e159e7056-shm.mount: Deactivated successfully. Nov 4 05:00:28.232804 systemd[1]: var-lib-kubelet-pods-04eb90a0\x2d0e18\x2d402d\x2da5bb\x2d5cde9e44fd0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drz9hf.mount: Deactivated successfully. Nov 4 05:00:28.232897 systemd[1]: var-lib-kubelet-pods-04eb90a0\x2d0e18\x2d402d\x2da5bb\x2d5cde9e44fd0c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 05:00:28.232972 systemd[1]: var-lib-kubelet-pods-04eb90a0\x2d0e18\x2d402d\x2da5bb\x2d5cde9e44fd0c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 05:00:28.856043 kubelet[2780]: I1104 05:00:28.855998 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04eb90a0-0e18-402d-a5bb-5cde9e44fd0c" path="/var/lib/kubelet/pods/04eb90a0-0e18-402d-a5bb-5cde9e44fd0c/volumes" Nov 4 05:00:28.857231 kubelet[2780]: I1104 05:00:28.857201 2780 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc821d30-98ae-4341-9591-4068a1937a63" path="/var/lib/kubelet/pods/bc821d30-98ae-4341-9591-4068a1937a63/volumes" Nov 4 05:00:29.161257 sshd[4452]: Connection closed by 139.178.89.65 port 52802 Nov 4 05:00:29.162213 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:29.167075 systemd-logind[1579]: Session 29 logged out. Waiting for processes to exit. Nov 4 05:00:29.168082 systemd[1]: sshd@28-172.232.15.13:22-139.178.89.65:52802.service: Deactivated successfully. Nov 4 05:00:29.170386 systemd[1]: session-29.scope: Deactivated successfully. Nov 4 05:00:29.172475 systemd-logind[1579]: Removed session 29. 
Nov 4 05:00:29.222889 systemd[1]: Started sshd@29-172.232.15.13:22-139.178.89.65:57622.service - OpenSSH per-connection server daemon (139.178.89.65:57622). Nov 4 05:00:29.524316 sshd[4600]: Accepted publickey for core from 139.178.89.65 port 57622 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY Nov 4 05:00:29.525956 sshd-session[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 05:00:29.535647 systemd-logind[1579]: New session 30 of user core. Nov 4 05:00:29.546027 systemd[1]: Started session-30.scope - Session 30 of User core. Nov 4 05:00:30.010086 kubelet[2780]: E1104 05:00:30.009977 2780 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 05:00:30.185425 systemd[1]: Created slice kubepods-burstable-poda4dbf17c_c7b4_4841_8ce4_ac6f06a61081.slice - libcontainer container kubepods-burstable-poda4dbf17c_c7b4_4841_8ce4_ac6f06a61081.slice. Nov 4 05:00:30.203111 sshd[4603]: Connection closed by 139.178.89.65 port 57622 Nov 4 05:00:30.205045 sshd-session[4600]: pam_unix(sshd:session): session closed for user core Nov 4 05:00:30.212240 systemd-logind[1579]: Session 30 logged out. Waiting for processes to exit. Nov 4 05:00:30.213187 systemd[1]: sshd@29-172.232.15.13:22-139.178.89.65:57622.service: Deactivated successfully. Nov 4 05:00:30.216590 systemd[1]: session-30.scope: Deactivated successfully. Nov 4 05:00:30.221209 systemd-logind[1579]: Removed session 30. 
Nov 4 05:00:30.258982 kubelet[2780]: I1104 05:00:30.258925 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-xtables-lock\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.258982 kubelet[2780]: I1104 05:00:30.258967 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-cilium-ipsec-secrets\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.258982 kubelet[2780]: I1104 05:00:30.258995 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-cilium-run\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259012 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-clustermesh-secrets\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259028 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-hubble-tls\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259043 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-bpf-maps\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259069 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-host-proc-sys-kernel\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259083 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnzn\" (UniqueName: \"kubernetes.io/projected/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-kube-api-access-ffnzn\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259174 kubelet[2780]: I1104 05:00:30.259096 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-hostproc\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259107 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-cni-path\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259119 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-lib-modules\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259131 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-cilium-cgroup\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259142 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-etc-cni-netd\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259154 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-host-proc-sys-net\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.259311 kubelet[2780]: I1104 05:00:30.259172 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4dbf17c-c7b4-4841-8ce4-ac6f06a61081-cilium-config-path\") pod \"cilium-lmpcn\" (UID: \"a4dbf17c-c7b4-4841-8ce4-ac6f06a61081\") " pod="kube-system/cilium-lmpcn"
Nov 4 05:00:30.263095 systemd[1]: Started sshd@30-172.232.15.13:22-139.178.89.65:57638.service - OpenSSH per-connection server daemon (139.178.89.65:57638).
Nov 4 05:00:30.494118 kubelet[2780]: E1104 05:00:30.494064 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:30.495305 containerd[1610]: time="2025-11-04T05:00:30.495234377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmpcn,Uid:a4dbf17c-c7b4-4841-8ce4-ac6f06a61081,Namespace:kube-system,Attempt:0,}"
Nov 4 05:00:30.523703 containerd[1610]: time="2025-11-04T05:00:30.523395711Z" level=info msg="connecting to shim 4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" namespace=k8s.io protocol=ttrpc version=3
Nov 4 05:00:30.549929 systemd[1]: Started cri-containerd-4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c.scope - libcontainer container 4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c.
Nov 4 05:00:30.561942 sshd[4613]: Accepted publickey for core from 139.178.89.65 port 57638 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:30.565417 sshd-session[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:30.574004 systemd-logind[1579]: New session 31 of user core.
Nov 4 05:00:30.581900 systemd[1]: Started session-31.scope - Session 31 of User core.
Nov 4 05:00:30.592585 containerd[1610]: time="2025-11-04T05:00:30.592554584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmpcn,Uid:a4dbf17c-c7b4-4841-8ce4-ac6f06a61081,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\""
Nov 4 05:00:30.594024 kubelet[2780]: E1104 05:00:30.594003 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:30.600746 containerd[1610]: time="2025-11-04T05:00:30.600475727Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 4 05:00:30.608240 containerd[1610]: time="2025-11-04T05:00:30.608198201Z" level=info msg="Container fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:00:30.613359 containerd[1610]: time="2025-11-04T05:00:30.613322611Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971\""
Nov 4 05:00:30.614678 containerd[1610]: time="2025-11-04T05:00:30.614222603Z" level=info msg="StartContainer for \"fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971\""
Nov 4 05:00:30.615291 containerd[1610]: time="2025-11-04T05:00:30.615246343Z" level=info msg="connecting to shim fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" protocol=ttrpc version=3
Nov 4 05:00:30.637926 systemd[1]: Started cri-containerd-fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971.scope - libcontainer container fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971.
Nov 4 05:00:30.677890 containerd[1610]: time="2025-11-04T05:00:30.677094267Z" level=info msg="StartContainer for \"fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971\" returns successfully"
Nov 4 05:00:30.692090 systemd[1]: cri-containerd-fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971.scope: Deactivated successfully.
Nov 4 05:00:30.694449 containerd[1610]: time="2025-11-04T05:00:30.694148011Z" level=info msg="received exit event container_id:\"fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971\" id:\"fb4b6a138af997d3f9456dfedd0eeb20b130130b8d14ae44c2c0be4500b5c971\" pid:4680 exited_at:{seconds:1762232430 nanos:693472327}"
Nov 4 05:00:30.716193 sshd[4666]: Connection closed by 139.178.89.65 port 57638
Nov 4 05:00:30.716085 sshd-session[4613]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:30.723853 systemd[1]: sshd@30-172.232.15.13:22-139.178.89.65:57638.service: Deactivated successfully.
Nov 4 05:00:30.728136 systemd[1]: session-31.scope: Deactivated successfully.
Nov 4 05:00:30.731105 systemd-logind[1579]: Session 31 logged out. Waiting for processes to exit.
Nov 4 05:00:30.733835 systemd-logind[1579]: Removed session 31.
Nov 4 05:00:30.782099 systemd[1]: Started sshd@31-172.232.15.13:22-139.178.89.65:57654.service - OpenSSH per-connection server daemon (139.178.89.65:57654).
Nov 4 05:00:31.085688 sshd[4717]: Accepted publickey for core from 139.178.89.65 port 57654 ssh2: RSA SHA256:czUaYLI8d1p6CnLaFADA3Sdie0qlY3MZ41jILb/UGTY
Nov 4 05:00:31.087931 sshd-session[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 05:00:31.095548 systemd-logind[1579]: New session 32 of user core.
Nov 4 05:00:31.100958 systemd[1]: Started session-32.scope - Session 32 of User core.
Nov 4 05:00:31.495693 kubelet[2780]: E1104 05:00:31.495569 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:31.501262 containerd[1610]: time="2025-11-04T05:00:31.501038932Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 4 05:00:31.513390 containerd[1610]: time="2025-11-04T05:00:31.513266394Z" level=info msg="Container b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:00:31.522031 containerd[1610]: time="2025-11-04T05:00:31.521995521Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04\""
Nov 4 05:00:31.523155 containerd[1610]: time="2025-11-04T05:00:31.523119520Z" level=info msg="StartContainer for \"b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04\""
Nov 4 05:00:31.525127 containerd[1610]: time="2025-11-04T05:00:31.525098721Z" level=info msg="connecting to shim b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" protocol=ttrpc version=3
Nov 4 05:00:31.557918 systemd[1]: Started cri-containerd-b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04.scope - libcontainer container b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04.
Nov 4 05:00:31.602955 containerd[1610]: time="2025-11-04T05:00:31.602887706Z" level=info msg="StartContainer for \"b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04\" returns successfully"
Nov 4 05:00:31.613305 systemd[1]: cri-containerd-b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04.scope: Deactivated successfully.
Nov 4 05:00:31.614144 containerd[1610]: time="2025-11-04T05:00:31.614061369Z" level=info msg="received exit event container_id:\"b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04\" id:\"b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04\" pid:4741 exited_at:{seconds:1762232431 nanos:613480825}"
Nov 4 05:00:31.646747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b383ed9473063fcbd014c1b360dc869f392a0bb60e49134b9103f293387e7f04-rootfs.mount: Deactivated successfully.
Nov 4 05:00:32.501595 kubelet[2780]: E1104 05:00:32.501547 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:32.509501 containerd[1610]: time="2025-11-04T05:00:32.508666380Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 4 05:00:32.527934 containerd[1610]: time="2025-11-04T05:00:32.527884660Z" level=info msg="Container 6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:00:32.532709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount151856391.mount: Deactivated successfully.
Nov 4 05:00:32.537298 containerd[1610]: time="2025-11-04T05:00:32.537269962Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a\""
Nov 4 05:00:32.538870 containerd[1610]: time="2025-11-04T05:00:32.538844017Z" level=info msg="StartContainer for \"6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a\""
Nov 4 05:00:32.540089 containerd[1610]: time="2025-11-04T05:00:32.540060406Z" level=info msg="connecting to shim 6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" protocol=ttrpc version=3
Nov 4 05:00:32.566913 systemd[1]: Started cri-containerd-6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a.scope - libcontainer container 6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a.
Nov 4 05:00:32.619197 containerd[1610]: time="2025-11-04T05:00:32.619050196Z" level=info msg="StartContainer for \"6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a\" returns successfully"
Nov 4 05:00:32.622513 systemd[1]: cri-containerd-6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a.scope: Deactivated successfully.
Nov 4 05:00:32.623137 containerd[1610]: time="2025-11-04T05:00:32.623106958Z" level=info msg="received exit event container_id:\"6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a\" id:\"6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a\" pid:4787 exited_at:{seconds:1762232432 nanos:622850940}"
Nov 4 05:00:32.644747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fdfb2e16fd4b11ca386841cf5f75ff220f03a81d0314f8b3d53c02eebab219a-rootfs.mount: Deactivated successfully.
Nov 4 05:00:33.511103 kubelet[2780]: E1104 05:00:33.510806 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:33.515499 containerd[1610]: time="2025-11-04T05:00:33.515065387Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 05:00:33.530073 containerd[1610]: time="2025-11-04T05:00:33.530003910Z" level=info msg="Container 801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:00:33.560603 containerd[1610]: time="2025-11-04T05:00:33.560540440Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7\""
Nov 4 05:00:33.565113 containerd[1610]: time="2025-11-04T05:00:33.565081879Z" level=info msg="StartContainer for \"801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7\""
Nov 4 05:00:33.566121 containerd[1610]: time="2025-11-04T05:00:33.566090329Z" level=info msg="connecting to shim 801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" protocol=ttrpc version=3
Nov 4 05:00:33.610930 systemd[1]: Started cri-containerd-801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7.scope - libcontainer container 801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7.
Nov 4 05:00:33.645557 systemd[1]: cri-containerd-801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7.scope: Deactivated successfully.
Nov 4 05:00:33.648989 containerd[1610]: time="2025-11-04T05:00:33.648929690Z" level=info msg="received exit event container_id:\"801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7\" id:\"801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7\" pid:4826 exited_at:{seconds:1762232433 nanos:647497313}"
Nov 4 05:00:33.660403 containerd[1610]: time="2025-11-04T05:00:33.660371835Z" level=info msg="StartContainer for \"801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7\" returns successfully"
Nov 4 05:00:33.676664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-801fdff7173e5eea121f2199c74ea629b797076f30ab27b6b7fb41035d044fa7-rootfs.mount: Deactivated successfully.
Nov 4 05:00:34.523806 kubelet[2780]: E1104 05:00:34.523620 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:34.531292 containerd[1610]: time="2025-11-04T05:00:34.531224760Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 05:00:34.551944 containerd[1610]: time="2025-11-04T05:00:34.551642827Z" level=info msg="Container bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928: CDI devices from CRI Config.CDIDevices: []"
Nov 4 05:00:34.564207 containerd[1610]: time="2025-11-04T05:00:34.564142695Z" level=info msg="CreateContainer within sandbox \"4ce355ab8b9b7425d26086a08a5a37f89b2c6c571ca78037fd031c88edd3469c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928\""
Nov 4 05:00:34.566813 containerd[1610]: time="2025-11-04T05:00:34.565871739Z" level=info msg="StartContainer for \"bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928\""
Nov 4 05:00:34.567082 containerd[1610]: time="2025-11-04T05:00:34.567060458Z" level=info msg="connecting to shim bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928" address="unix:///run/containerd/s/b4364b86e60abc5212a8c0e59c597f44dc7b41b3281b57cbbfb773f6222dc192" protocol=ttrpc version=3
Nov 4 05:00:34.596153 systemd[1]: Started cri-containerd-bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928.scope - libcontainer container bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928.
Nov 4 05:00:34.651055 containerd[1610]: time="2025-11-04T05:00:34.650981496Z" level=info msg="StartContainer for \"bbac311fe2ea22328bb2d68846b9f69bbea308162a606198887b45a602fbf928\" returns successfully"
Nov 4 05:00:35.255882 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 4 05:00:35.540608 kubelet[2780]: E1104 05:00:35.539764 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:35.556310 kubelet[2780]: I1104 05:00:35.556238 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmpcn" podStartSLOduration=5.55621912 podStartE2EDuration="5.55621912s" podCreationTimestamp="2025-11-04 05:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 05:00:35.554934331 +0000 UTC m=+260.840953898" watchObservedRunningTime="2025-11-04 05:00:35.55621912 +0000 UTC m=+260.842238687"
Nov 4 05:00:36.542502 kubelet[2780]: E1104 05:00:36.542447 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:36.678047 update_engine[1582]: I20251104 05:00:36.677908 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 4 05:00:36.678047 update_engine[1582]: I20251104 05:00:36.678033 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 4 05:00:36.678762 update_engine[1582]: I20251104 05:00:36.678445 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 4 05:00:36.680467 update_engine[1582]: E20251104 05:00:36.680408 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 4 05:00:36.680745 update_engine[1582]: I20251104 05:00:36.680526 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 4 05:00:38.482739 systemd-networkd[1499]: lxc_health: Link UP
Nov 4 05:00:38.493932 systemd-networkd[1499]: lxc_health: Gained carrier
Nov 4 05:00:38.498959 kubelet[2780]: E1104 05:00:38.498616 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:38.551433 kubelet[2780]: E1104 05:00:38.551403 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:39.555120 kubelet[2780]: E1104 05:00:39.554464 2780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Nov 4 05:00:40.381188 systemd-networkd[1499]: lxc_health: Gained IPv6LL
Nov 4 05:00:45.022977 kubelet[2780]: E1104 05:00:45.022937 2780 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42518->127.0.0.1:44379: write tcp 127.0.0.1:42518->127.0.0.1:44379: write: broken pipe
Nov 4 05:00:46.679163 update_engine[1582]: I20251104 05:00:46.678962 1582 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 4 05:00:46.679163 update_engine[1582]: I20251104 05:00:46.679104 1582 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 4 05:00:46.680096 update_engine[1582]: I20251104 05:00:46.679598 1582 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 4 05:00:46.680445 update_engine[1582]: E20251104 05:00:46.680387 1582 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 4 05:00:46.680493 update_engine[1582]: I20251104 05:00:46.680454 1582 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Nov 4 05:00:48.165432 sshd[4720]: Connection closed by 139.178.89.65 port 57654
Nov 4 05:00:48.166368 sshd-session[4717]: pam_unix(sshd:session): session closed for user core
Nov 4 05:00:48.171447 systemd[1]: sshd@31-172.232.15.13:22-139.178.89.65:57654.service: Deactivated successfully.
Nov 4 05:00:48.173966 systemd[1]: session-32.scope: Deactivated successfully.
Nov 4 05:00:48.175361 systemd-logind[1579]: Session 32 logged out. Waiting for processes to exit.
Nov 4 05:00:48.177292 systemd-logind[1579]: Removed session 32.