Nov 5 15:59:25.282229 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:59:25.282258 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:59:25.282267 kernel: BIOS-provided physical RAM map:
Nov 5 15:59:25.282274 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 5 15:59:25.282280 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 5 15:59:25.282288 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 15:59:25.282296 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 5 15:59:25.282302 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 5 15:59:25.282308 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 15:59:25.282314 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 15:59:25.282321 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:59:25.282327 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 15:59:25.282333 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 5 15:59:25.282341 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:59:25.282349 kernel: NX (Execute Disable) protection: active
Nov 5 15:59:25.282356 kernel: APIC: Static calls initialized
Nov 5 15:59:25.282362 kernel: SMBIOS 2.8 present.
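As a quick cross-check of the e820 map above, the three `usable` ranges can be summed to confirm the guest's advertised RAM. A minimal Python sketch (ranges copied from the log; end addresses are inclusive):

```python
# e820 "usable" ranges from the BIOS-provided RAM map (end addresses inclusive)
usable = [
    (0x0000000000000000, 0x000000000009f7ff),
    (0x0000000000100000, 0x000000007ffdcfff),
    (0x0000000100000000, 0x000000017fffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total_bytes} bytes ({total_bytes / 2**20:.1f} MiB)")
```

The sum comes out just under 4 GiB, consistent with the `Memory: 3984336K/4193772K` line printed later in this boot once the kernel has carved out its own reservations.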
Nov 5 15:59:25.282369 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 5 15:59:25.282595 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:59:25.282609 kernel: Hypervisor detected: KVM
Nov 5 15:59:25.282616 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 15:59:25.282623 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:59:25.282630 kernel: kvm-clock: using sched offset of 6160388850 cycles
Nov 5 15:59:25.282638 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:59:25.282645 kernel: tsc: Detected 2000.000 MHz processor
Nov 5 15:59:25.282652 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:59:25.282660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:59:25.282671 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 5 15:59:25.282679 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 15:59:25.282686 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:59:25.282693 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 15:59:25.282700 kernel: Using GB pages for direct mapping
Nov 5 15:59:25.282707 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:59:25.282714 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 5 15:59:25.282723 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282730 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282738 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282745 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 15:59:25.282752 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282759 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282771 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282778 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:59:25.282786 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 5 15:59:25.282793 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 5 15:59:25.282801 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 15:59:25.282811 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 5 15:59:25.282818 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 5 15:59:25.282825 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 5 15:59:25.282833 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 5 15:59:25.283102 kernel: No NUMA configuration found
Nov 5 15:59:25.283113 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 5 15:59:25.283122 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Nov 5 15:59:25.283129 kernel: Zone ranges:
Nov 5 15:59:25.283140 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:59:25.283148 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 5 15:59:25.283155 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 15:59:25.283162 kernel: Device empty
Nov 5 15:59:25.283169 kernel: Movable zone start for each node
Nov 5 15:59:25.283176 kernel: Early memory node ranges
Nov 5 15:59:25.283184 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 15:59:25.283193 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 5 15:59:25.283200 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 15:59:25.283207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 5 15:59:25.283214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:59:25.283222 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:59:25.283231 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 5 15:59:25.283239 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:59:25.283246 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:59:25.283256 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:59:25.283263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:59:25.283270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:59:25.283277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:59:25.283285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:59:25.283292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:59:25.283299 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:59:25.283309 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:59:25.283316 kernel: TSC deadline timer available
Nov 5 15:59:25.283323 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:59:25.283331 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:59:25.283338 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:59:25.283345 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:59:25.283352 kernel: CPU topo: Num. cores per package: 2
Nov 5 15:59:25.283362 kernel: CPU topo: Num. threads per package: 2
Nov 5 15:59:25.283368 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 15:59:25.283375 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:59:25.283382 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 15:59:25.283390 kernel: kvm-guest: setup PV sched yield
Nov 5 15:59:25.283397 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 15:59:25.283404 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:59:25.283412 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:59:25.283421 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 15:59:25.283428 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 15:59:25.283436 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 15:59:25.283443 kernel: pcpu-alloc: [0] 0 1
Nov 5 15:59:25.283450 kernel: kvm-guest: PV spinlocks enabled
Nov 5 15:59:25.283950 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 15:59:25.283962 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:59:25.283975 kernel: random: crng init done
Nov 5 15:59:25.283983 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:59:25.283990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:59:25.283997 kernel: Fallback order for Node 0: 0
Nov 5 15:59:25.284005 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 5 15:59:25.284012 kernel: Policy zone: Normal
Nov 5 15:59:25.284019 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:59:25.284029 kernel: software IO TLB: area num 2.
Nov 5 15:59:25.284036 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:59:25.284044 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:59:25.284051 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:59:25.284058 kernel: Dynamic Preempt: voluntary
Nov 5 15:59:25.284066 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:59:25.284077 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:59:25.284087 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:59:25.284095 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:59:25.284102 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:59:25.284109 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:59:25.284117 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:59:25.284124 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:59:25.284131 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:59:25.284147 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:59:25.284155 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:59:25.284163 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 15:59:25.284173 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:59:25.284180 kernel: Console: colour VGA+ 80x25
Nov 5 15:59:25.284188 kernel: printk: legacy console [tty0] enabled
Nov 5 15:59:25.284195 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:59:25.284203 kernel: ACPI: Core revision 20240827
Nov 5 15:59:25.284213 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:59:25.284220 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:59:25.284228 kernel: x2apic enabled
Nov 5 15:59:25.284236 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:59:25.284243 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 15:59:25.284251 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 15:59:25.284261 kernel: kvm-guest: setup PV IPIs
Nov 5 15:59:25.284268 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:59:25.284276 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 5 15:59:25.284284 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Nov 5 15:59:25.284291 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:59:25.284299 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:59:25.284307 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:59:25.284316 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:59:25.284324 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:59:25.284331 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:59:25.284339 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 15:59:25.284347 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:59:25.284354 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:59:25.284362 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 15:59:25.284372 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 15:59:25.284380 kernel: active return thunk: srso_alias_return_thunk
Nov 5 15:59:25.284388 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 15:59:25.284395 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 5 15:59:25.284403 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:59:25.284411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:59:25.284418 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:59:25.284428 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:59:25.284435 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 5 15:59:25.284443 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:59:25.284451 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 5 15:59:25.284458 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 5 15:59:25.284466 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:59:25.284473 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:59:25.284483 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:59:25.284491 kernel: landlock: Up and running.
Nov 5 15:59:25.284498 kernel: SELinux: Initializing.
Nov 5 15:59:25.284721 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:59:25.284735 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:59:25.284743 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 5 15:59:25.284751 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:59:25.284763 kernel: ... version: 0
Nov 5 15:59:25.284770 kernel: ... bit width: 48
Nov 5 15:59:25.284778 kernel: ... generic registers: 6
Nov 5 15:59:25.284786 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:59:25.284793 kernel: ... max period: 00007fffffffffff
Nov 5 15:59:25.284801 kernel: ... fixed-purpose events: 0
Nov 5 15:59:25.284808 kernel: ... event mask: 000000000000003f
Nov 5 15:59:25.284818 kernel: signal: max sigframe size: 3376
Nov 5 15:59:25.284826 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:59:25.284834 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:59:25.284841 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:59:25.284849 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:59:25.284856 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:59:25.284864 kernel: .... node #0, CPUs: #1
Nov 5 15:59:25.284873 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:59:25.284881 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 5 15:59:25.284889 kernel: Memory: 3984336K/4193772K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 204760K reserved, 0K cma-reserved)
Nov 5 15:59:25.284897 kernel: devtmpfs: initialized
Nov 5 15:59:25.284904 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:59:25.284912 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:59:25.284920 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:59:25.284929 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:59:25.284937 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:59:25.284944 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:59:25.284952 kernel: audit: type=2000 audit(1762358362.670:1): state=initialized audit_enabled=0 res=1
Nov 5 15:59:25.284959 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:59:25.284967 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:59:25.284974 kernel: cpuidle: using governor menu
Nov 5 15:59:25.284984 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:59:25.284991 kernel: dca service started, version 1.12.1
Nov 5 15:59:25.284999 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 15:59:25.285007 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 15:59:25.285015 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:59:25.285022 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:59:25.285030 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:59:25.285039 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:59:25.285047 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:59:25.285055 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:59:25.285062 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:59:25.285070 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:59:25.285077 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:59:25.285085 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:59:25.285092 kernel: ACPI: Interpreter enabled
Nov 5 15:59:25.285102 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 15:59:25.285109 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:59:25.285117 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:59:25.285124 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:59:25.285132 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 15:59:25.285140 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:59:25.285380 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:59:25.285607 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 15:59:25.285991 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
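The ECAM window reported above, `[mem 0xb0000000-0xbfffffff]` for `[bus 00-ff]`, is exactly the size PCIe ECAM requires: 4 KiB of config space per function, 8 functions per device, 32 devices per bus, 256 buses. A small sanity-check sketch:

```python
# PCIe ECAM: 256 buses x 32 devices x 8 functions x 4 KiB config space each
ecam_size = 256 * 32 * 8 * 4096

# ECAM window from the log (end address inclusive)
window = 0xbfffffff - 0xb0000000 + 1

print(hex(ecam_size), hex(window))  # both 0x10000000, i.e. 256 MiB
```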
Nov 5 15:59:25.286003 kernel: PCI host bridge to bus 0000:00
Nov 5 15:59:25.286188 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:59:25.286406 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:59:25.286615 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:59:25.286959 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 5 15:59:25.287682 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 15:59:25.288255 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 5 15:59:25.292764 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:59:25.292972 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:59:25.293168 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:59:25.293531 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 15:59:25.294294 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 15:59:25.294695 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 15:59:25.294881 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:59:25.295075 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:59:25.295252 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 5 15:59:25.295427 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 15:59:25.295853 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 15:59:25.296521 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:59:25.296750 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 5 15:59:25.297002 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 15:59:25.297187 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 15:59:25.297363 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 15:59:25.297549 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:59:25.297973 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 15:59:25.298168 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 15:59:25.298345 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 5 15:59:25.298525 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 5 15:59:25.298941 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 15:59:25.299123 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 15:59:25.299135 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:59:25.299148 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:59:25.299156 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:59:25.299164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:59:25.299172 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 15:59:25.299180 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 15:59:25.299187 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 15:59:25.299195 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 15:59:25.299206 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 15:59:25.299214 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 15:59:25.299221 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 15:59:25.299229 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 15:59:25.299236 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 15:59:25.299244 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 15:59:25.299252 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 15:59:25.299262 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 15:59:25.299271 kernel: iommu: Default domain type: Translated
Nov 5 15:59:25.299278 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:59:25.299286 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:59:25.299294 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:59:25.299302 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 5 15:59:25.299310 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 5 15:59:25.299768 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 15:59:25.300003 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 15:59:25.300180 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:59:25.300191 kernel: vgaarb: loaded
Nov 5 15:59:25.300200 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:59:25.300209 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:59:25.300217 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:59:25.300230 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:59:25.300238 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:59:25.300246 kernel: pnp: PnP ACPI init
Nov 5 15:59:25.300435 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 15:59:25.300447 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 15:59:25.300456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:59:25.300467 kernel: NET: Registered PF_INET protocol family
Nov 5 15:59:25.300674 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 15:59:25.300689 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 15:59:25.300698 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:59:25.300706 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:59:25.300714 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 15:59:25.300722 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 15:59:25.300736 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:59:25.300744 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:59:25.300752 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:59:25.300760 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:59:25.300940 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:59:25.301104 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:59:25.301266 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:59:25.301672 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 5 15:59:25.301842 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 15:59:25.302002 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 5 15:59:25.302013 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:59:25.302021 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 5 15:59:25.302029 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 5 15:59:25.302037 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Nov 5 15:59:25.302049 kernel: Initialise system trusted keyrings
Nov 5 15:59:25.302057 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 15:59:25.302065 kernel: Key type asymmetric registered
Nov 5 15:59:25.302073 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:59:25.302080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:59:25.302088 kernel: io scheduler mq-deadline registered
Nov 5 15:59:25.302096 kernel: io scheduler kyber registered
Nov 5 15:59:25.302106 kernel: io scheduler bfq registered
Nov 5 15:59:25.302114 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:59:25.302122 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 15:59:25.302130 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 15:59:25.302138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:59:25.302146 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:59:25.302154 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:59:25.302164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:59:25.302172 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:59:25.302350 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 15:59:25.302362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:59:25.302807 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 15:59:25.302982 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:59:23 UTC (1762358363)
Nov 5 15:59:25.303157 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 15:59:25.303167 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 15:59:25.303176 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:59:25.303184 kernel: Segment Routing with IPv6
Nov 5 15:59:25.303192 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:59:25.303200 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:59:25.303208 kernel: Key type dns_resolver registered
Nov 5 15:59:25.303219 kernel: IPI shorthand broadcast: enabled
Nov 5 15:59:25.303227 kernel: sched_clock: Marking stable (1338006190, 382756070)->(1817569080, -96806820)
Nov 5 15:59:25.303235 kernel: registered taskstats version 1
Nov 5 15:59:25.303243 kernel: Loading compiled-in X.509 certificates
Nov 5 15:59:25.303250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:59:25.303258 kernel: Demotion targets for Node 0: null
Nov 5 15:59:25.303266 kernel: Key type .fscrypt registered
Nov 5 15:59:25.303274 kernel: Key type fscrypt-provisioning registered
Nov 5 15:59:25.303284 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:59:25.303292 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:59:25.303299 kernel: ima: No architecture policies found
Nov 5 15:59:25.303307 kernel: clk: Disabling unused clocks
Nov 5 15:59:25.303315 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:59:25.303323 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:59:25.303333 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:59:25.303340 kernel: Run /init as init process
Nov 5 15:59:25.303348 kernel: with arguments:
Nov 5 15:59:25.303356 kernel: /init
Nov 5 15:59:25.303364 kernel: with environment:
Nov 5 15:59:25.303372 kernel: HOME=/
Nov 5 15:59:25.303394 kernel: TERM=linux
Nov 5 15:59:25.303404 kernel: SCSI subsystem initialized
Nov 5 15:59:25.303414 kernel: libata version 3.00 loaded.
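The rtc_cmos line above pairs the RTC wall-clock time with the Unix epoch value it used (1762358363). The correspondence is easy to verify (a minimal Python sketch):

```python
from datetime import datetime, timezone

# Epoch value printed by rtc_cmos when it set the system clock
epoch = 1762358363
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# 2025-11-05T15:59:23+00:00, matching the timestamp in the log
```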
Nov 5 15:59:25.303626 kernel: ahci 0000:00:1f.2: version 3.0 Nov 5 15:59:25.303643 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 5 15:59:25.303825 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 5 15:59:25.304002 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 5 15:59:25.304178 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 5 15:59:25.304386 kernel: scsi host0: ahci Nov 5 15:59:25.306391 kernel: scsi host1: ahci Nov 5 15:59:25.306794 kernel: scsi host2: ahci Nov 5 15:59:25.306992 kernel: scsi host3: ahci Nov 5 15:59:25.307180 kernel: scsi host4: ahci Nov 5 15:59:25.307376 kernel: scsi host5: ahci Nov 5 15:59:25.307389 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Nov 5 15:59:25.307398 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Nov 5 15:59:25.307407 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Nov 5 15:59:25.307416 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Nov 5 15:59:25.307424 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Nov 5 15:59:25.307432 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Nov 5 15:59:25.307444 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307452 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307460 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307470 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307478 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307486 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 15:59:25.307716 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Nov 5 15:59:25.307920 kernel: scsi host6: Virtio SCSI HBA Nov 5 15:59:25.308128 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 5 15:59:25.308331 kernel: sd 6:0:0:0: Power-on or device reset occurred Nov 5 15:59:25.308525 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 5 15:59:25.308772 kernel: sd 6:0:0:0: [sda] Write Protect is off Nov 5 15:59:25.308979 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 5 15:59:25.309173 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 5 15:59:25.309185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:59:25.309194 kernel: GPT:25804799 != 167739391 Nov 5 15:59:25.309202 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:59:25.309211 kernel: GPT:25804799 != 167739391 Nov 5 15:59:25.309219 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 15:59:25.309230 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 5 15:59:25.309657 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Nov 5 15:59:25.309675 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:59:25.309684 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:59:25.309693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:59:25.309702 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 15:59:25.309714 kernel: raid6: avx2x4 gen() 31681 MB/s Nov 5 15:59:25.309725 kernel: raid6: avx2x2 gen() 29501 MB/s Nov 5 15:59:25.309733 kernel: raid6: avx2x1 gen() 19763 MB/s Nov 5 15:59:25.309741 kernel: raid6: using algorithm avx2x4 gen() 31681 MB/s Nov 5 15:59:25.309749 kernel: raid6: .... xor() 4409 MB/s, rmw enabled
Nov 5 15:59:25.309759 kernel: raid6: using avx2x2 recovery algorithm Nov 5 15:59:25.309768 kernel: xor: automatically using best checksumming function avx Nov 5 15:59:25.309776 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:59:25.309784 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (167) Nov 5 15:59:25.309792 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01 Nov 5 15:59:25.309801 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:59:25.309809 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 15:59:25.309819 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:59:25.309827 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:59:25.309835 kernel: loop: module loaded Nov 5 15:59:25.309844 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 15:59:25.309852 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:59:25.309862 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:59:25.309875 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:59:25.309884 systemd[1]: Detected virtualization kvm. Nov 5 15:59:25.309893 systemd[1]: Detected architecture x86-64. Nov 5 15:59:25.309902 systemd[1]: Running in initrd. Nov 5 15:59:25.309910 systemd[1]: No hostname configured, using default hostname. Nov 5 15:59:25.309919 systemd[1]: Hostname set to . Nov 5 15:59:25.309929 systemd[1]: Initializing machine ID from random generator.
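An aside on the sd 6:0:0:0 and GPT messages earlier in this boot log: the reported capacity and the GPT complaint are both simple arithmetic over the logged numbers. The disk has 167739392 512-byte blocks, and the backup GPT header sits at LBA 25804799 instead of at the last LBA, which implies the virtual disk was grown from a smaller base image after provisioning. A quick check (all values copied from the log; the "original image size" is an inference, not something the log states):

```python
# Verify the numbers in the sd/GPT kernel messages above.
blocks = 167739392               # 512-byte logical blocks reported for sda
size_bytes = blocks * 512

print(size_bytes / 10**9)        # decimal gigabytes: the log's "85.9 GB"
print(size_bytes / 2**30)        # binary gibibytes: rounds to the log's "80.0 GiB"

# "GPT:25804799 != 167739391" compares the LBA where the backup header
# actually sits against the disk's true last LBA (blocks - 1).
assert blocks - 1 == 167739391
old_blocks = 25804799 + 1        # implied size of the pre-resize image
print(old_blocks * 512 / 2**30)  # ~12.3 GiB before the volume was grown
```

This mismatch is harmless at boot; the kernel still reads the primary table, which is why the log later shows disk-uuid rewriting the headers.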
Nov 5 15:59:25.309938 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:59:25.309946 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:59:25.309955 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:59:25.309963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:59:25.309973 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:59:25.309982 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:59:25.309993 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:59:25.310002 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:59:25.310010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:59:25.310019 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:59:25.310028 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:59:25.310039 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:59:25.310047 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:59:25.310055 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:59:25.310064 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:59:25.310073 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:59:25.310081 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:59:25.310090 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:59:25.310100 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 15:59:25.310109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:59:25.310117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:59:25.310126 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:59:25.310134 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:59:25.310143 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:59:25.310151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:59:25.310162 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:59:25.310170 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:59:25.310179 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:59:25.310188 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:59:25.310196 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:59:25.310205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:59:25.310213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:59:25.310224 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:59:25.310233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:59:25.310241 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:59:25.310252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:59:25.310289 systemd-journald[303]: Collecting audit messages is disabled. Nov 5 15:59:25.310310 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:59:25.310322 systemd-journald[303]: Journal started Nov 5 15:59:25.310527 systemd-journald[303]: Runtime Journal (/run/log/journal/d580bbe6f89346a486befc865e160984) is 8M, max 78.2M, 70.2M free. Nov 5 15:59:25.314625 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:59:25.317817 kernel: Bridge firewalling registered Nov 5 15:59:25.317357 systemd-modules-load[304]: Inserted module 'br_netfilter' Nov 5 15:59:25.408129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:59:25.409424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:59:25.412119 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:59:25.417780 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:59:25.421707 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:59:25.428170 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:59:25.436610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:59:25.447493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:59:25.454196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:59:25.458417 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:59:25.458669 systemd-tmpfiles[325]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:59:25.469754 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:59:25.471181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
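The bridge warning above ("Update your scripts to load br_netfilter if you need this") resolves itself moments later when systemd-modules-load inserts the module. On systems where that does not happen automatically, the usual fix is a modules-load.d fragment; this is an illustrative example, not a file present on this host:

```
# /etc/modules-load.d/br_netfilter.conf — illustrative fragment.
# systemd-modules-load.service reads this directory at boot and
# loads each listed module, restoring bridged iptables filtering.
br_netfilter
```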
Nov 5 15:59:25.478696 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:59:25.502619 dracut-cmdline[343]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4 Nov 5 15:59:25.524964 systemd-resolved[338]: Positive Trust Anchors: Nov 5 15:59:25.524977 systemd-resolved[338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:59:25.524982 systemd-resolved[338]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:59:25.525009 systemd-resolved[338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:59:25.556381 systemd-resolved[338]: Defaulting to hostname 'linux'. Nov 5 15:59:25.559140 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:59:25.560109 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:59:25.627590 kernel: Loading iSCSI transport class v2.0-870. 
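The dracut-cmdline entry above logs the effective kernel command line, including dracut's own additions prepended to the BOOT_IMAGE line, which is why parameters such as rootflags=rw appear twice. Splitting the line into key/value pairs makes such duplicates easy to spot; a minimal sketch (the cmdline string is abridged from the log):

```python
# Split a kernel command line into (key, value) pairs; flag-style
# parameters with no "=" get value None. partition() splits at the
# first "=", so values like LABEL=ROOT survive intact.
cmdline = ("rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw "
           "mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a rootflags=rw "
           "consoleblank=0 root=LABEL=ROOT flatcar.first_boot=detected")

def parse_cmdline(line):
    params = []
    for tok in line.split():
        key, sep, val = tok.partition("=")
        params.append((key, val if sep else None))
    return params

pairs = parse_cmdline(cmdline)
keys = [k for k, _ in pairs]
dupes = {k for k in keys if keys.count(k) > 1}
print(dupes)  # → {'rootflags'}
```

Duplicated parameters are harmless here: for most kernel options the last occurrence wins, and both copies agree.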
Nov 5 15:59:25.643590 kernel: iscsi: registered transport (tcp) Nov 5 15:59:25.666033 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:59:25.666071 kernel: QLogic iSCSI HBA Driver Nov 5 15:59:25.697907 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:59:25.716524 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:59:25.721054 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:59:25.773910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:59:25.776335 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:59:25.778986 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:59:25.814873 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:59:25.819287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:59:25.852468 systemd-udevd[585]: Using default interface naming scheme 'v257'. Nov 5 15:59:25.867693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:59:25.874331 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:59:25.893335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:59:25.917518 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation Nov 5 15:59:25.920283 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:59:25.935062 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:59:25.940643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 15:59:25.973094 systemd-networkd[708]: lo: Link UP Nov 5 15:59:25.973103 systemd-networkd[708]: lo: Gained carrier Nov 5 15:59:25.975054 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:59:25.975994 systemd[1]: Reached target network.target - Network. Nov 5 15:59:26.045454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:59:26.050435 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:59:26.211907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 5 15:59:26.382458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 5 15:59:26.409693 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 15:59:26.404105 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:59:26.404111 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:59:26.405908 systemd-networkd[708]: eth0: Link UP Nov 5 15:59:26.406437 systemd-networkd[708]: eth0: Gained carrier Nov 5 15:59:26.421676 kernel: AES CTR mode by8 optimization enabled Nov 5 15:59:26.406449 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:59:26.436613 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 15:59:26.440270 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 5 15:59:26.460842 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 5 15:59:26.473441 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:59:26.474570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 5 15:59:26.474645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:59:26.479439 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:59:26.488683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:59:26.492630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:59:26.495597 disk-uuid[827]: Primary Header is updated. Nov 5 15:59:26.495597 disk-uuid[827]: Secondary Entries is updated. Nov 5 15:59:26.495597 disk-uuid[827]: Secondary Header is updated. Nov 5 15:59:26.496491 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:59:26.501102 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:59:26.505584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:59:26.510303 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:59:26.722875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:59:26.738073 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:59:27.242637 systemd-networkd[708]: eth0: DHCPv4 address 172.238.168.232/24, gateway 172.238.168.1 acquired from 23.205.167.177 Nov 5 15:59:27.592887 disk-uuid[830]: Warning: The kernel is still using the old partition table. Nov 5 15:59:27.592887 disk-uuid[830]: The new table will be used at the next reboot or after you Nov 5 15:59:27.592887 disk-uuid[830]: run partprobe(8) or kpartx(8) Nov 5 15:59:27.592887 disk-uuid[830]: The operation has completed successfully. Nov 5 15:59:27.599698 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:59:27.599865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:59:27.602855 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
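The DHCPv4 lease logged above (172.238.168.232/24 with gateway 172.238.168.1) is internally consistent: the gateway lies inside the interface's own /24, so it is reachable on-link. Python's ipaddress module makes this kind of sanity check one-liner territory:

```python
import ipaddress

# Values copied from the systemd-networkd DHCPv4 line in the log.
iface = ipaddress.ip_interface("172.238.168.232/24")
gateway = ipaddress.ip_address("172.238.168.1")

print(iface.network)             # → 172.238.168.0/24
print(gateway in iface.network)  # gateway is on-link → True
```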
Nov 5 15:59:27.640603 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (857) Nov 5 15:59:27.645524 systemd-networkd[708]: eth0: Gained IPv6LL Nov 5 15:59:27.650300 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:59:27.650329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:59:27.654734 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:59:27.654758 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:59:27.657448 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:59:27.669652 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:59:27.670168 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:59:27.673080 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:59:27.803257 ignition[876]: Ignition 2.22.0 Nov 5 15:59:27.803279 ignition[876]: Stage: fetch-offline Nov 5 15:59:27.803321 ignition[876]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:27.803334 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:27.808653 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:59:27.803621 ignition[876]: parsed url from cmdline: "" Nov 5 15:59:27.803626 ignition[876]: no config URL provided Nov 5 15:59:27.803632 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:59:27.803648 ignition[876]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:59:27.812805 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 15:59:27.803654 ignition[876]: failed to fetch config: resource requires networking Nov 5 15:59:27.803800 ignition[876]: Ignition finished successfully Nov 5 15:59:27.845051 ignition[882]: Ignition 2.22.0 Nov 5 15:59:27.845061 ignition[882]: Stage: fetch Nov 5 15:59:27.845172 ignition[882]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:27.845182 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:27.845439 ignition[882]: parsed url from cmdline: "" Nov 5 15:59:27.845443 ignition[882]: no config URL provided Nov 5 15:59:27.845449 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:59:27.845457 ignition[882]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:59:27.845489 ignition[882]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 5 15:59:27.946158 ignition[882]: PUT result: OK Nov 5 15:59:27.946223 ignition[882]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 5 15:59:28.053155 ignition[882]: GET result: OK Nov 5 15:59:28.053862 ignition[882]: parsing config with SHA512: 00959c7e01d48bc63d7504f17fbc40183ac3c7b0b9068d5cb26d76dd34c3f0596a92d9af7e8b0f18c3c460204e703f6b5758ded4d388421630a20f4c00414825 Nov 5 15:59:28.062891 unknown[882]: fetched base config from "system" Nov 5 15:59:28.062912 unknown[882]: fetched base config from "system" Nov 5 15:59:28.063264 ignition[882]: fetch: fetch complete Nov 5 15:59:28.062921 unknown[882]: fetched user config from "akamai" Nov 5 15:59:28.063271 ignition[882]: fetch: fetch passed Nov 5 15:59:28.067136 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:59:28.063320 ignition[882]: Ignition finished successfully Nov 5 15:59:28.072752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
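The fetch stage above logs "parsing config with SHA512: 00959c…" before handing the user-data to the parser. The 128-hex-character value matches the shape of a plain SHA-512 hex digest of the fetched config bytes, which is easy to reproduce with hashlib. The payload below is a stand-in, since the real user-data fetched from 169.254.169.254 is not shown in the log:

```python
import hashlib

# Stand-in payload: only illustrates the digest computation, not the
# actual config this host received from the Akamai metadata service.
payload = b'{"ignition": {"version": "3.4.0"}}'
digest = hashlib.sha512(payload).hexdigest()
print(len(digest))   # → 128 hex characters, matching the log line's digest
```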
Nov 5 15:59:28.109907 ignition[889]: Ignition 2.22.0 Nov 5 15:59:28.109931 ignition[889]: Stage: kargs Nov 5 15:59:28.110071 ignition[889]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:28.110084 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:28.113539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:59:28.110718 ignition[889]: kargs: kargs passed Nov 5 15:59:28.110770 ignition[889]: Ignition finished successfully Nov 5 15:59:28.117379 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:59:28.172516 ignition[896]: Ignition 2.22.0 Nov 5 15:59:28.172543 ignition[896]: Stage: disks Nov 5 15:59:28.172728 ignition[896]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:28.172740 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:28.173516 ignition[896]: disks: disks passed Nov 5 15:59:28.173582 ignition[896]: Ignition finished successfully Nov 5 15:59:28.177774 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:59:28.202916 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:59:28.204350 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:59:28.206862 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:59:28.209257 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:59:28.211344 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:59:28.215100 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:59:28.268585 systemd-fsck[905]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 15:59:28.270870 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:59:28.275810 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 5 15:59:28.403608 kernel: EXT4-fs (sda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none. Nov 5 15:59:28.404495 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:59:28.406079 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:59:28.409129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:59:28.412083 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:59:28.415055 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:59:28.415096 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:59:28.415122 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:59:28.426852 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:59:28.429969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:59:28.443107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (913) Nov 5 15:59:28.443134 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:59:28.443146 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:59:28.449159 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:59:28.449189 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:59:28.452020 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:59:28.456922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:59:28.505768 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:59:28.511622 initrd-setup-root[944]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:59:28.516911 initrd-setup-root[951]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:59:28.521834 initrd-setup-root[958]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:59:28.630609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:59:28.634669 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:59:28.637731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:59:28.654705 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:59:28.660627 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:59:28.681663 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:59:28.703097 ignition[1028]: INFO : Ignition 2.22.0 Nov 5 15:59:28.703097 ignition[1028]: INFO : Stage: mount Nov 5 15:59:28.705506 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:28.705506 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:28.705506 ignition[1028]: INFO : mount: mount passed Nov 5 15:59:28.705506 ignition[1028]: INFO : Ignition finished successfully Nov 5 15:59:28.706426 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:59:28.708678 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:59:28.731047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 5 15:59:28.756600 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1039) Nov 5 15:59:28.761747 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4 Nov 5 15:59:28.761777 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 15:59:28.770696 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 15:59:28.770722 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:59:28.775616 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:59:28.778394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:59:28.816238 ignition[1055]: INFO : Ignition 2.22.0 Nov 5 15:59:28.816238 ignition[1055]: INFO : Stage: files Nov 5 15:59:28.818681 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:59:28.818681 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 15:59:28.818681 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:59:28.823084 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:59:28.823084 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:59:28.826195 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:59:28.826195 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:59:28.829149 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:59:28.827985 unknown[1055]: wrote ssh authorized keys file for user: core Nov 5 15:59:28.831825 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:59:28.831825 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:59:29.017899 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:59:29.082480 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 15:59:29.084674 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:59:29.084674 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 5 15:59:29.230155 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 5 15:59:29.304343 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 15:59:29.304343 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:59:29.308291 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 5 15:59:29.744684 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 5 15:59:30.062185 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 5 15:59:30.062185 ignition[1055]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:59:30.087904 ignition[1055]: INFO : files: files passed Nov 5 15:59:30.087904 ignition[1055]: INFO : Ignition finished successfully Nov 5 15:59:30.074423 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:59:30.090188 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:59:30.094801 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:59:30.115397 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:59:30.115852 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:59:30.127606 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:59:30.127606 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:59:30.131890 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:59:30.131682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:59:30.133888 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:59:30.136355 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:59:30.193676 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:59:30.193820 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:59:30.196124 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:59:30.197625 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:59:30.200022 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:59:30.200945 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:59:30.229172 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:59:30.232374 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:59:30.264029 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:59:30.264161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:59:30.265546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:59:30.267830 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:59:30.269866 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:59:30.270013 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:59:30.272900 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:59:30.274117 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:59:30.276148 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:59:30.278294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:59:30.280611 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:59:30.282589 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:59:30.285089 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:59:30.287352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:59:30.290026 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:59:30.292292 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:59:30.294716 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:59:30.296910 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:59:30.297138 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:59:30.299673 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:59:30.301055 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:59:30.302950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:59:30.303796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:59:30.306099 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:59:30.306305 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:59:30.309073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:59:30.309292 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:59:30.311452 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:59:30.311634 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:59:30.315664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:59:30.320688 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:59:30.323135 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:59:30.324330 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:59:30.327180 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:59:30.327366 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:59:30.329291 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:59:30.329642 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:59:30.347384 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:59:30.348735 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:59:30.362613 ignition[1112]: INFO : Ignition 2.22.0
Nov 5 15:59:30.362613 ignition[1112]: INFO : Stage: umount
Nov 5 15:59:30.362613 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:59:30.362613 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 5 15:59:30.370110 ignition[1112]: INFO : umount: umount passed
Nov 5 15:59:30.370110 ignition[1112]: INFO : Ignition finished successfully
Nov 5 15:59:30.365923 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:59:30.366056 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:59:30.368163 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:59:30.368244 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:59:30.371272 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:59:30.371324 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:59:30.372187 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 15:59:30.372242 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 15:59:30.374100 systemd[1]: Stopped target network.target - Network.
Nov 5 15:59:30.399580 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:59:30.399643 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:59:30.401432 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:59:30.403112 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:59:30.407649 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:59:30.408787 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:59:30.410598 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:59:30.412722 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:59:30.412772 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:59:30.414849 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:59:30.414896 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:59:30.416595 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:59:30.416653 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:59:30.418577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:59:30.418636 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:59:30.420846 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:59:30.422597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:59:30.427692 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:59:30.428625 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:59:30.428742 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:59:30.431517 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:59:30.431678 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:59:30.434661 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:59:30.434783 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:59:30.440343 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:59:30.441979 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:59:30.442031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:59:30.443950 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:59:30.444011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:59:30.446376 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:59:30.448034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:59:30.448101 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:59:30.449052 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:59:30.449103 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:59:30.453669 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:59:30.453729 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:59:30.455226 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:59:30.474583 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:59:30.478040 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:59:30.481479 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:59:30.481608 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:59:30.482796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:59:30.482837 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:59:30.484619 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:59:30.484676 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:59:30.488229 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:59:30.488289 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:59:30.491056 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:59:30.491114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:59:30.494886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:59:30.496178 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:59:30.496237 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:59:30.497187 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:59:30.497240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:59:30.499708 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 15:59:30.499766 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:59:30.501157 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:59:30.501212 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:59:30.503060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:59:30.503114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:59:30.506940 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:59:30.507067 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:59:30.512137 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:59:30.512298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:59:30.514908 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:59:30.518966 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:59:30.535244 systemd[1]: Switching root.
Nov 5 15:59:30.574147 systemd-journald[303]: Journal stopped
Nov 5 15:59:31.952155 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:59:31.952186 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:59:31.952199 kernel: SELinux: policy capability open_perms=1
Nov 5 15:59:31.952209 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:59:31.952221 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:59:31.952231 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:59:31.952241 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:59:31.952251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:59:31.952261 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:59:31.952270 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:59:31.952282 kernel: audit: type=1403 audit(1762358370.731:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:59:31.952293 systemd[1]: Successfully loaded SELinux policy in 84.120ms.
Nov 5 15:59:31.952304 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.883ms.
Nov 5 15:59:31.952317 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:59:31.952330 systemd[1]: Detected virtualization kvm.
Nov 5 15:59:31.952341 systemd[1]: Detected architecture x86-64.
Nov 5 15:59:31.952351 systemd[1]: Detected first boot.
Nov 5 15:59:31.952362 systemd[1]: Initializing machine ID from random generator.
Nov 5 15:59:31.952374 zram_generator::config[1156]: No configuration found.
Nov 5 15:59:31.952387 kernel: Guest personality initialized and is inactive
Nov 5 15:59:31.952397 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:59:31.952407 kernel: Initialized host personality
Nov 5 15:59:31.952417 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:59:31.952428 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:59:31.952438 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:59:31.952451 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:59:31.952461 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:59:31.952472 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:59:31.952483 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:59:31.952493 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:59:31.952504 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:59:31.952517 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:59:31.952528 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:59:31.952539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:59:31.952549 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:59:31.952593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:59:31.952608 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:59:31.952620 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:59:31.952635 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:59:31.952646 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:59:31.952660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:59:31.952671 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:59:31.952682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:59:31.952694 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:59:31.952707 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:59:31.952719 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:59:31.952730 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:59:31.952741 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:59:31.952753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:59:31.952764 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:59:31.952777 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:59:31.952788 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:59:31.952798 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:59:31.952810 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:59:31.952821 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:59:31.952832 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:59:31.952845 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:59:31.952856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:59:31.952867 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:59:31.952878 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:59:31.952889 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:59:31.952902 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:59:31.952914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:31.952925 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:59:31.952936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:59:31.952946 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:59:31.952958 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:59:31.952971 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:59:31.952982 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:59:31.952993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:59:31.953004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:59:31.953015 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:59:31.953026 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:59:31.953037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:59:31.953050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:59:31.953061 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:59:31.953072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:59:31.953083 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:59:31.953094 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:59:31.953104 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:59:31.953115 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:59:31.953128 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:59:31.953141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:59:31.953152 kernel: fuse: init (API version 7.41)
Nov 5 15:59:31.953163 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:59:31.953173 kernel: ACPI: bus type drm_connector registered
Nov 5 15:59:31.953184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:59:31.953195 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:59:31.953208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:59:31.953219 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:59:31.953230 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:59:31.953241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:31.953252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:59:31.953263 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:59:31.953296 systemd-journald[1244]: Collecting audit messages is disabled.
Nov 5 15:59:31.953318 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:59:31.953329 systemd-journald[1244]: Journal started
Nov 5 15:59:31.953350 systemd-journald[1244]: Runtime Journal (/run/log/journal/757aa36bfd83420e8940e2d11d1c7367) is 8M, max 78.2M, 70.2M free.
Nov 5 15:59:31.488871 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:59:31.507619 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 15:59:31.508284 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:59:31.958717 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:59:31.961958 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:59:31.963718 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:59:31.965244 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:59:31.967091 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:59:31.969291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:59:31.971828 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:59:31.972274 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:59:31.974228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:59:31.974544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:59:31.976060 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:59:31.976695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:59:31.978039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:59:31.978521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:59:31.980119 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:59:31.980413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:59:31.981822 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:59:31.982123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:59:31.983945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:59:31.985743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:59:31.989054 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:59:31.990789 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:59:32.012797 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:59:32.015386 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:59:32.019683 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:59:32.023937 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:59:32.025088 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:59:32.025178 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:59:32.028114 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:59:32.030289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:59:32.033222 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:59:32.040306 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:59:32.041674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:59:32.044195 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:59:32.047773 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:59:32.051842 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:59:32.055764 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:59:32.061824 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:59:32.066742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:59:32.069002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:59:32.120994 systemd-journald[1244]: Time spent on flushing to /var/log/journal/757aa36bfd83420e8940e2d11d1c7367 is 69.944ms for 989 entries.
Nov 5 15:59:32.120994 systemd-journald[1244]: System Journal (/var/log/journal/757aa36bfd83420e8940e2d11d1c7367) is 8M, max 588.1M, 580.1M free.
Nov 5 15:59:32.240729 systemd-journald[1244]: Received client request to flush runtime journal.
Nov 5 15:59:32.240769 kernel: loop1: detected capacity change from 0 to 128048
Nov 5 15:59:32.240793 kernel: loop2: detected capacity change from 0 to 110984
Nov 5 15:59:32.240808 kernel: loop3: detected capacity change from 0 to 8
Nov 5 15:59:32.132128 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:59:32.134225 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:59:32.143885 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:59:32.150694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:59:32.162777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:59:32.172952 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Nov 5 15:59:32.172964 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Nov 5 15:59:32.189232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:59:32.201838 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:59:32.204716 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:59:32.247666 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:59:32.264643 kernel: loop4: detected capacity change from 0 to 229808
Nov 5 15:59:32.281732 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:59:32.286438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:59:32.291843 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:59:32.310704 kernel: loop5: detected capacity change from 0 to 128048
Nov 5 15:59:32.309778 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:59:32.328577 kernel: loop6: detected capacity change from 0 to 110984
Nov 5 15:59:32.336118 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Nov 5 15:59:32.337146 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Nov 5 15:59:32.356817 kernel: loop7: detected capacity change from 0 to 8
Nov 5 15:59:32.353796 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:59:32.364588 kernel: loop1: detected capacity change from 0 to 229808
Nov 5 15:59:32.381629 (sd-merge)[1305]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 5 15:59:32.391926 (sd-merge)[1305]: Merged extensions into '/usr'.
Nov 5 15:59:32.398362 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:59:32.399814 systemd[1]: Reload requested from client PID 1281 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:59:32.399884 systemd[1]: Reloading...
Nov 5 15:59:32.487717 zram_generator::config[1340]: No configuration found.
Nov 5 15:59:32.589468 systemd-resolved[1303]: Positive Trust Anchors:
Nov 5 15:59:32.589483 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:59:32.589489 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:59:32.589517 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:59:32.596811 systemd-resolved[1303]: Defaulting to hostname 'linux'.
Nov 5 15:59:32.752956 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:59:32.753187 systemd[1]: Reloading finished in 352 ms.
Nov 5 15:59:32.789079 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:59:32.790381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:59:32.791902 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:59:32.797898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:59:32.804234 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:59:32.808701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:59:32.820876 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:59:32.842755 systemd[1]: Reload requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:59:32.842774 systemd[1]: Reloading...
Nov 5 15:59:32.843111 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:59:32.843150 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:59:32.843456 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:59:32.843768 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:59:32.845254 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:59:32.845760 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Nov 5 15:59:32.845891 systemd-tmpfiles[1385]: ACLs are not supported, ignoring.
Nov 5 15:59:32.859044 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:59:32.859066 systemd-tmpfiles[1385]: Skipping /boot
Nov 5 15:59:32.886757 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:59:32.886822 systemd-tmpfiles[1385]: Skipping /boot
Nov 5 15:59:32.886977 systemd-udevd[1386]: Using default interface naming scheme 'v257'.
Nov 5 15:59:33.008606 zram_generator::config[1441]: No configuration found.
Nov 5 15:59:33.134592 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:59:33.153588 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 15:59:33.169604 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:59:33.187495 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 15:59:33.187852 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 15:59:33.310025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 5 15:59:33.311048 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:59:33.311252 systemd[1]: Reloading finished in 468 ms.
Nov 5 15:59:33.321128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:59:33.323663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:59:33.349584 kernel: EDAC MC: Ver: 3.0.0
Nov 5 15:59:33.384982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:33.386766 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:59:33.391855 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:59:33.393532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:59:33.397721 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:59:33.403826 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:59:33.408715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:59:33.421143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:59:33.427874 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:59:33.435326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:59:33.437665 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:59:33.442073 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:59:33.454195 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:59:33.460741 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:59:33.463312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:33.491067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:33.491521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:59:33.504721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:59:33.506744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:59:33.506889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:59:33.507052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:59:33.509383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:59:33.514704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:59:33.524510 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:59:33.546741 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:59:33.557857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:59:33.583914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:59:33.585020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:59:33.587817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:59:33.590352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:59:33.592860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:59:33.606006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:59:33.621632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:59:33.623253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:59:33.623710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:59:33.627029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:59:33.636378 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:59:33.710828 augenrules[1554]: No rules
Nov 5 15:59:33.713693 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:59:33.714056 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:59:33.732055 systemd-networkd[1516]: lo: Link UP
Nov 5 15:59:33.732381 systemd-networkd[1516]: lo: Gained carrier
Nov 5 15:59:33.742433 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:59:33.744041 systemd[1]: Reached target network.target - Network.
Nov 5 15:59:33.745129 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:59:33.745137 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:59:33.749255 systemd-networkd[1516]: eth0: Link UP
Nov 5 15:59:33.749734 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:59:33.749907 systemd-networkd[1516]: eth0: Gained carrier
Nov 5 15:59:33.749923 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:59:33.756080 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:59:33.758896 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:59:33.761795 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:59:33.770732 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:59:33.771744 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:59:33.790179 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:59:33.910928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:59:34.107094 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 15:59:34.111322 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 15:59:34.114631 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 15:59:34.133807 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 15:59:34.135378 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:59:34.136377 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 15:59:34.137487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 15:59:34.138458 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 15:59:34.139737 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 15:59:34.140857 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 15:59:34.141982 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 15:59:34.143086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 15:59:34.143138 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:59:34.143965 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:59:34.145679 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 15:59:34.147815 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 15:59:34.150529 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 15:59:34.151825 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 15:59:34.152860 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 15:59:34.155970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 15:59:34.157128 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 15:59:34.158875 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 15:59:34.160722 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:59:34.161522 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:59:34.162383 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:59:34.162431 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:59:34.163638 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 15:59:34.165880 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 5 15:59:34.173702 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 15:59:34.175878 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 15:59:34.179747 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 15:59:34.187889 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 15:59:34.188922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 15:59:34.194988 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 15:59:34.205366 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 15:59:34.210645 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 15:59:34.216207 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 15:59:34.223875 jq[1577]: false
Nov 5 15:59:34.225525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 15:59:34.234647 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing passwd entry cache
Nov 5 15:59:34.241096 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 15:59:34.242429 oslogin_cache_refresh[1579]: Refreshing passwd entry cache
Nov 5 15:59:34.243646 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 15:59:34.244122 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 15:59:34.245890 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 15:59:34.248508 extend-filesystems[1578]: Found /dev/sda6
Nov 5 15:59:34.258745 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 15:59:34.260528 oslogin_cache_refresh[1579]: Failure getting users, quitting
Nov 5 15:59:34.262696 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting users, quitting
Nov 5 15:59:34.262696 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:59:34.262696 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Refreshing group entry cache
Nov 5 15:59:34.262696 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Failure getting groups, quitting
Nov 5 15:59:34.262696 google_oslogin_nss_cache[1579]: oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:59:34.260546 oslogin_cache_refresh[1579]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 15:59:34.260615 oslogin_cache_refresh[1579]: Refreshing group entry cache
Nov 5 15:59:34.261079 oslogin_cache_refresh[1579]: Failure getting groups, quitting
Nov 5 15:59:34.261089 oslogin_cache_refresh[1579]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 15:59:34.265461 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 15:59:34.266834 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 15:59:34.268021 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 15:59:34.268368 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 15:59:34.268798 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 15:59:34.273330 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 15:59:34.273693 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 15:59:34.283668 extend-filesystems[1578]: Found /dev/sda9
Nov 5 15:59:34.289147 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 15:59:34.291851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 15:59:34.299582 update_engine[1597]: I20251105 15:59:34.298331 1597 main.cc:92] Flatcar Update Engine starting
Nov 5 15:59:34.299855 jq[1599]: true
Nov 5 15:59:34.314498 extend-filesystems[1578]: Checking size of /dev/sda9
Nov 5 15:59:34.320460 jq[1620]: true
Nov 5 15:59:34.321860 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 15:59:34.340216 tar[1604]: linux-amd64/LICENSE
Nov 5 15:59:34.340216 tar[1604]: linux-amd64/helm
Nov 5 15:59:34.347114 coreos-metadata[1574]: Nov 05 15:59:34.343 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 15:59:34.365667 extend-filesystems[1578]: Resized partition /dev/sda9
Nov 5 15:59:34.371061 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 15:59:34.390871 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Nov 5 15:59:34.401093 dbus-daemon[1575]: [system] SELinux support is enabled
Nov 5 15:59:34.403273 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 15:59:34.410909 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 15:59:34.410940 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 15:59:34.413240 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 15:59:34.413255 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 15:59:34.443176 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 15:59:34.444871 update_engine[1597]: I20251105 15:59:34.444180 1597 update_check_scheduler.cc:74] Next update check in 3m55s
Nov 5 15:59:34.449639 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 15:59:34.472652 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 15:59:34.472683 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 15:59:34.474713 systemd-logind[1592]: New seat seat0.
Nov 5 15:59:34.475747 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 15:59:34.537257 bash[1648]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 15:59:34.547163 systemd-networkd[1516]: eth0: DHCPv4 address 172.238.168.232/24, gateway 172.238.168.1 acquired from 23.205.167.177
Nov 5 15:59:34.547347 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1516 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 5 15:59:34.547884 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 15:59:34.561031 systemd[1]: Starting sshkeys.service...
Nov 5 15:59:34.561791 systemd-timesyncd[1526]: Network configuration changed, trying to establish connection.
Nov 5 15:59:34.566587 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 5 15:59:35.365348 systemd-timesyncd[1526]: Contacted time server 193.29.63.226:123 (0.flatcar.pool.ntp.org).
Nov 5 15:59:35.365425 systemd-timesyncd[1526]: Initial clock synchronization to Wed 2025-11-05 15:59:35.365069 UTC.
Nov 5 15:59:35.370082 systemd-resolved[1303]: Clock change detected. Flushing caches.
Nov 5 15:59:35.383178 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 5 15:59:35.387622 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 15:59:35.388404 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 5 15:59:35.539264 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 15:59:35.547178 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 15:59:35.609406 coreos-metadata[1653]: Nov 05 15:59:35.609 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 15:59:35.618290 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 15:59:35.620353 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 15:59:35.631683 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 15:59:35.646804 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 5 15:59:35.653446 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 5 15:59:35.654693 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1652 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 5 15:59:35.664050 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 5 15:59:35.682097 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 15:59:35.686230 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 15:59:35.691804 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 15:59:35.695336 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 15:59:35.716386 kernel: EXT4-fs (sda9): resized filesystem to 19377147
Nov 5 15:59:35.723055 coreos-metadata[1653]: Nov 05 15:59:35.722 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Nov 5 15:59:35.734154 extend-filesystems[1632]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 5 15:59:35.734154 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 10
Nov 5 15:59:35.734154 extend-filesystems[1632]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long.
Nov 5 15:59:35.742101 extend-filesystems[1578]: Resized filesystem in /dev/sda9
Nov 5 15:59:35.735640 locksmithd[1647]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 15:59:35.739328 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 15:59:35.739613 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 15:59:35.757939 containerd[1607]: time="2025-11-05T15:59:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 15:59:35.760753 containerd[1607]: time="2025-11-05T15:59:35.760700999Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780447349Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.11µs"
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780474229Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780490489Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780650119Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780670029Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780695819Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780766409Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781038 containerd[1607]: time="2025-11-05T15:59:35.780777989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781814 containerd[1607]: time="2025-11-05T15:59:35.781780559Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781852 containerd[1607]: time="2025-11-05T15:59:35.781808979Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781852 containerd[1607]: time="2025-11-05T15:59:35.781831709Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 15:59:35.781852 containerd[1607]: time="2025-11-05T15:59:35.781843969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 15:59:35.782024 containerd[1607]: time="2025-11-05T15:59:35.781992429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 15:59:35.782459 containerd[1607]: time="2025-11-05T15:59:35.782423359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:59:35.782495 containerd[1607]: time="2025-11-05T15:59:35.782474639Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 15:59:35.782495 containerd[1607]: time="2025-11-05T15:59:35.782490069Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 15:59:35.782532 containerd[1607]: time="2025-11-05T15:59:35.782517949Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 15:59:35.786113 containerd[1607]: time="2025-11-05T15:59:35.786079799Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 15:59:35.786197 containerd[1607]: time="2025-11-05T15:59:35.786166929Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790216959Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790254039Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790266459Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790276809Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790288009Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790296449Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790306919Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790317889Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790327699Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790336469Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790344619Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790354649Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790464159Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 15:59:35.790997 containerd[1607]: time="2025-11-05T15:59:35.790482119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790494739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790508609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790521949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790531169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790540379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790552849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790563159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790571699Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790580099Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790630149Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790641349Z" level=info msg="Start snapshots syncer"
Nov 5 15:59:35.791241 containerd[1607]: time="2025-11-05T15:59:35.790661229Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 15:59:35.791446 containerd[1607]: time="2025-11-05T15:59:35.791025089Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 15:59:35.791446 containerd[1607]: time="2025-11-05T15:59:35.791069509Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791126449Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791222609Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791253339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791263339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791272609Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791287069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791295989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791306899Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791324389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791333019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791342409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791367819Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791378569Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 15:59:35.791580 containerd[1607]: time="2025-11-05T15:59:35.791385679Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791393959Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791400939Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791423999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791435499Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791450339Z" level=info msg="runtime interface created"
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791456139Z" level=info msg="created NRI interface"
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791464059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791473139Z" level=info msg="Connect containerd service"
Nov 5 15:59:35.791809 containerd[1607]: time="2025-11-05T15:59:35.791496309Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 5 15:59:35.795204 containerd[1607]:
time="2025-11-05T15:59:35.795015479Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:59:35.818469 polkitd[1677]: Started polkitd version 126 Nov 5 15:59:35.830326 polkitd[1677]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 15:59:35.831134 polkitd[1677]: Loading rules from directory /run/polkit-1/rules.d Nov 5 15:59:35.831199 polkitd[1677]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:59:35.831401 polkitd[1677]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 15:59:35.831423 polkitd[1677]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:59:35.831459 polkitd[1677]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 15:59:35.833182 polkitd[1677]: Finished loading, compiling and executing 2 rules Nov 5 15:59:35.834014 systemd[1]: Started polkit.service - Authorization Manager. Nov 5 15:59:35.836592 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 5 15:59:35.839192 polkitd[1677]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 5 15:59:35.863138 coreos-metadata[1653]: Nov 05 15:59:35.862 INFO Fetch successful Nov 5 15:59:35.878401 systemd-resolved[1303]: System hostname changed to '172-238-168-232'. Nov 5 15:59:35.878540 systemd-hostnamed[1652]: Hostname set to <172-238-168-232> (transient) Nov 5 15:59:35.900675 update-ssh-keys[1705]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:59:35.903038 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:59:35.912228 systemd[1]: Finished sshkeys.service. 
Nov 5 15:59:35.932757 containerd[1607]: time="2025-11-05T15:59:35.932451979Z" level=info msg="Start subscribing containerd event" Nov 5 15:59:35.932757 containerd[1607]: time="2025-11-05T15:59:35.932526369Z" level=info msg="Start recovering state" Nov 5 15:59:35.932757 containerd[1607]: time="2025-11-05T15:59:35.932579209Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:59:35.932757 containerd[1607]: time="2025-11-05T15:59:35.932674799Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:59:35.933196 containerd[1607]: time="2025-11-05T15:59:35.933126719Z" level=info msg="Start event monitor" Nov 5 15:59:35.933196 containerd[1607]: time="2025-11-05T15:59:35.933147859Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:59:35.933196 containerd[1607]: time="2025-11-05T15:59:35.933159629Z" level=info msg="Start streaming server" Nov 5 15:59:35.933612 containerd[1607]: time="2025-11-05T15:59:35.933173279Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:59:35.933612 containerd[1607]: time="2025-11-05T15:59:35.933293319Z" level=info msg="runtime interface starting up..." Nov 5 15:59:35.933612 containerd[1607]: time="2025-11-05T15:59:35.933303099Z" level=info msg="starting plugins..." Nov 5 15:59:35.933612 containerd[1607]: time="2025-11-05T15:59:35.933320859Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:59:35.934235 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:59:35.935017 containerd[1607]: time="2025-11-05T15:59:35.934119279Z" level=info msg="containerd successfully booted in 0.176833s" Nov 5 15:59:36.045561 tar[1604]: linux-amd64/README.md Nov 5 15:59:36.068280 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 5 15:59:36.132037 coreos-metadata[1574]: Nov 05 15:59:36.131 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 5 15:59:36.224890 coreos-metadata[1574]: Nov 05 15:59:36.224 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 5 15:59:36.294328 systemd-networkd[1516]: eth0: Gained IPv6LL Nov 5 15:59:36.297643 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:59:36.299533 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:59:36.303040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:59:36.307240 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:59:36.339077 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:59:36.424118 coreos-metadata[1574]: Nov 05 15:59:36.424 INFO Fetch successful Nov 5 15:59:36.424323 coreos-metadata[1574]: Nov 05 15:59:36.424 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 5 15:59:36.705146 coreos-metadata[1574]: Nov 05 15:59:36.705 INFO Fetch successful Nov 5 15:59:36.816590 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:59:36.818546 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:59:37.269632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:59:37.271672 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:59:37.273307 systemd[1]: Startup finished in 2.585s (kernel) + 5.876s (initrd) + 5.845s (userspace) = 14.307s. 
Nov 5 15:59:37.283322 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:59:37.826310 kubelet[1755]: E1105 15:59:37.826059 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:59:37.830430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:59:37.830639 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:59:37.831085 systemd[1]: kubelet.service: Consumed 914ms CPU time, 266.2M memory peak. Nov 5 15:59:38.996983 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:59:38.998341 systemd[1]: Started sshd@0-172.238.168.232:22-139.178.89.65:60426.service - OpenSSH per-connection server daemon (139.178.89.65:60426). Nov 5 15:59:39.361681 sshd[1767]: Accepted publickey for core from 139.178.89.65 port 60426 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:39.363704 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:39.371085 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:59:39.372151 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:59:39.380455 systemd-logind[1592]: New session 1 of user core. Nov 5 15:59:39.393806 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:59:39.397758 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 5 15:59:39.411758 (systemd)[1772]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:59:39.414985 systemd-logind[1592]: New session c1 of user core. Nov 5 15:59:39.556282 systemd[1772]: Queued start job for default target default.target. Nov 5 15:59:39.567393 systemd[1772]: Created slice app.slice - User Application Slice. Nov 5 15:59:39.567429 systemd[1772]: Reached target paths.target - Paths. Nov 5 15:59:39.567477 systemd[1772]: Reached target timers.target - Timers. Nov 5 15:59:39.570900 systemd[1772]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:59:39.582707 systemd[1772]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:59:39.582823 systemd[1772]: Reached target sockets.target - Sockets. Nov 5 15:59:39.582861 systemd[1772]: Reached target basic.target - Basic System. Nov 5 15:59:39.582907 systemd[1772]: Reached target default.target - Main User Target. Nov 5 15:59:39.582971 systemd[1772]: Startup finished in 161ms. Nov 5 15:59:39.583512 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:59:39.595053 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:59:39.868413 systemd[1]: Started sshd@1-172.238.168.232:22-139.178.89.65:60436.service - OpenSSH per-connection server daemon (139.178.89.65:60436). Nov 5 15:59:40.222518 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 60436 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:40.224371 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:40.231354 systemd-logind[1592]: New session 2 of user core. Nov 5 15:59:40.240058 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 5 15:59:40.479443 sshd[1786]: Connection closed by 139.178.89.65 port 60436 Nov 5 15:59:40.480721 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:40.486414 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:59:40.486913 systemd[1]: sshd@1-172.238.168.232:22-139.178.89.65:60436.service: Deactivated successfully. Nov 5 15:59:40.490248 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:59:40.491816 systemd-logind[1592]: Removed session 2. Nov 5 15:59:40.537041 systemd[1]: Started sshd@2-172.238.168.232:22-139.178.89.65:60444.service - OpenSSH per-connection server daemon (139.178.89.65:60444). Nov 5 15:59:40.884452 sshd[1792]: Accepted publickey for core from 139.178.89.65 port 60444 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:40.886951 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:40.893865 systemd-logind[1592]: New session 3 of user core. Nov 5 15:59:40.905069 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:59:41.125546 sshd[1795]: Connection closed by 139.178.89.65 port 60444 Nov 5 15:59:41.126101 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:41.131641 systemd[1]: sshd@2-172.238.168.232:22-139.178.89.65:60444.service: Deactivated successfully. Nov 5 15:59:41.133760 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:59:41.135178 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:59:41.136569 systemd-logind[1592]: Removed session 3. Nov 5 15:59:41.191052 systemd[1]: Started sshd@3-172.238.168.232:22-139.178.89.65:60448.service - OpenSSH per-connection server daemon (139.178.89.65:60448). 
Nov 5 15:59:41.544315 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 60448 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:41.546295 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:41.552894 systemd-logind[1592]: New session 4 of user core. Nov 5 15:59:41.562054 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:59:41.800404 sshd[1804]: Connection closed by 139.178.89.65 port 60448 Nov 5 15:59:41.801729 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:41.806651 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:59:41.807492 systemd[1]: sshd@3-172.238.168.232:22-139.178.89.65:60448.service: Deactivated successfully. Nov 5 15:59:41.810275 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:59:41.811978 systemd-logind[1592]: Removed session 4. Nov 5 15:59:41.872335 systemd[1]: Started sshd@4-172.238.168.232:22-139.178.89.65:60456.service - OpenSSH per-connection server daemon (139.178.89.65:60456). Nov 5 15:59:42.213280 sshd[1810]: Accepted publickey for core from 139.178.89.65 port 60456 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:42.216030 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:42.224123 systemd-logind[1592]: New session 5 of user core. Nov 5 15:59:42.230257 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 15:59:42.421565 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:59:42.422004 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:59:42.439710 sudo[1814]: pam_unix(sudo:session): session closed for user root Nov 5 15:59:42.491482 sshd[1813]: Connection closed by 139.178.89.65 port 60456 Nov 5 15:59:42.492374 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:42.499469 systemd[1]: sshd@4-172.238.168.232:22-139.178.89.65:60456.service: Deactivated successfully. Nov 5 15:59:42.502292 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:59:42.503143 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:59:42.504632 systemd-logind[1592]: Removed session 5. Nov 5 15:59:42.554184 systemd[1]: Started sshd@5-172.238.168.232:22-139.178.89.65:60460.service - OpenSSH per-connection server daemon (139.178.89.65:60460). Nov 5 15:59:42.907378 sshd[1820]: Accepted publickey for core from 139.178.89.65 port 60460 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:42.909620 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:42.915153 systemd-logind[1592]: New session 6 of user core. Nov 5 15:59:42.922054 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 15:59:43.105890 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:59:43.106256 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:59:43.112455 sudo[1825]: pam_unix(sudo:session): session closed for user root Nov 5 15:59:43.120166 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:59:43.120482 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:59:43.131527 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:59:43.175067 augenrules[1847]: No rules Nov 5 15:59:43.176587 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:59:43.176901 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:59:43.177849 sudo[1824]: pam_unix(sudo:session): session closed for user root Nov 5 15:59:43.228701 sshd[1823]: Connection closed by 139.178.89.65 port 60460 Nov 5 15:59:43.229414 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Nov 5 15:59:43.235508 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:59:43.236504 systemd[1]: sshd@5-172.238.168.232:22-139.178.89.65:60460.service: Deactivated successfully. Nov 5 15:59:43.239014 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:59:43.240766 systemd-logind[1592]: Removed session 6. Nov 5 15:59:43.294863 systemd[1]: Started sshd@6-172.238.168.232:22-139.178.89.65:60462.service - OpenSSH per-connection server daemon (139.178.89.65:60462). 
Nov 5 15:59:43.638048 sshd[1856]: Accepted publickey for core from 139.178.89.65 port 60462 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 15:59:43.639400 sshd-session[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:59:43.645146 systemd-logind[1592]: New session 7 of user core. Nov 5 15:59:43.657056 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:59:43.840409 sudo[1860]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:59:43.840737 sudo[1860]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:59:44.193314 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:59:44.207245 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:59:44.450476 dockerd[1877]: time="2025-11-05T15:59:44.450358529Z" level=info msg="Starting up" Nov 5 15:59:44.451772 dockerd[1877]: time="2025-11-05T15:59:44.451722939Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:59:44.468753 dockerd[1877]: time="2025-11-05T15:59:44.468698159Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:59:44.515282 dockerd[1877]: time="2025-11-05T15:59:44.515245869Z" level=info msg="Loading containers: start." Nov 5 15:59:44.528956 kernel: Initializing XFRM netlink socket Nov 5 15:59:44.821299 systemd-networkd[1516]: docker0: Link UP Nov 5 15:59:44.824727 dockerd[1877]: time="2025-11-05T15:59:44.824675139Z" level=info msg="Loading containers: done." Nov 5 15:59:44.840273 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1445407902-merged.mount: Deactivated successfully. 
Nov 5 15:59:44.841179 dockerd[1877]: time="2025-11-05T15:59:44.841102059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:59:44.841292 dockerd[1877]: time="2025-11-05T15:59:44.841203079Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:59:44.841346 dockerd[1877]: time="2025-11-05T15:59:44.841319369Z" level=info msg="Initializing buildkit" Nov 5 15:59:44.868200 dockerd[1877]: time="2025-11-05T15:59:44.868170509Z" level=info msg="Completed buildkit initialization" Nov 5 15:59:44.875814 dockerd[1877]: time="2025-11-05T15:59:44.875755819Z" level=info msg="Daemon has completed initialization" Nov 5 15:59:44.875984 dockerd[1877]: time="2025-11-05T15:59:44.875945579Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:59:44.876015 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:59:45.736580 containerd[1607]: time="2025-11-05T15:59:45.736272489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 15:59:46.434319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542549956.mount: Deactivated successfully. 
Nov 5 15:59:47.651768 containerd[1607]: time="2025-11-05T15:59:47.651704079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:47.652691 containerd[1607]: time="2025-11-05T15:59:47.652557069Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 5 15:59:47.653250 containerd[1607]: time="2025-11-05T15:59:47.653219839Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:47.655778 containerd[1607]: time="2025-11-05T15:59:47.655754489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:47.657247 containerd[1607]: time="2025-11-05T15:59:47.656868859Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.92055789s" Nov 5 15:59:47.657247 containerd[1607]: time="2025-11-05T15:59:47.656897339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 15:59:47.657771 containerd[1607]: time="2025-11-05T15:59:47.657748519Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 15:59:48.081190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 5 15:59:48.084163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:59:48.317133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:59:48.325743 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:59:48.387895 kubelet[2152]: E1105 15:59:48.387747 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:59:48.394167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:59:48.394371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:59:48.394987 systemd[1]: kubelet.service: Consumed 233ms CPU time, 108.4M memory peak. 
Nov 5 15:59:48.993941 containerd[1607]: time="2025-11-05T15:59:48.993858969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:48.995353 containerd[1607]: time="2025-11-05T15:59:48.995115859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 5 15:59:48.996176 containerd[1607]: time="2025-11-05T15:59:48.995982589Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:48.999097 containerd[1607]: time="2025-11-05T15:59:48.999068279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:49.000184 containerd[1607]: time="2025-11-05T15:59:49.000153909Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.34238309s" Nov 5 15:59:49.000231 containerd[1607]: time="2025-11-05T15:59:49.000188439Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 15:59:49.000752 containerd[1607]: time="2025-11-05T15:59:49.000714719Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 15:59:50.254649 containerd[1607]: time="2025-11-05T15:59:50.254578239Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:50.255793 containerd[1607]: time="2025-11-05T15:59:50.255756509Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 5 15:59:50.256440 containerd[1607]: time="2025-11-05T15:59:50.256386579Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:50.260449 containerd[1607]: time="2025-11-05T15:59:50.259629339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:50.262265 containerd[1607]: time="2025-11-05T15:59:50.262234799Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.26148981s" Nov 5 15:59:50.262347 containerd[1607]: time="2025-11-05T15:59:50.262333089Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 15:59:50.264202 containerd[1607]: time="2025-11-05T15:59:50.264150989Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 15:59:51.505700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398857800.mount: Deactivated successfully. 
Nov 5 15:59:51.941084 containerd[1607]: time="2025-11-05T15:59:51.940363189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:51.941084 containerd[1607]: time="2025-11-05T15:59:51.940988609Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 5 15:59:51.941875 containerd[1607]: time="2025-11-05T15:59:51.941828879Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:51.943440 containerd[1607]: time="2025-11-05T15:59:51.943403829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:51.943990 containerd[1607]: time="2025-11-05T15:59:51.943966219Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.67977852s" Nov 5 15:59:51.944232 containerd[1607]: time="2025-11-05T15:59:51.944218059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 15:59:51.945162 containerd[1607]: time="2025-11-05T15:59:51.945088349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 15:59:52.576314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834730701.mount: Deactivated successfully. 
Nov 5 15:59:53.405486 containerd[1607]: time="2025-11-05T15:59:53.405092899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:53.406558 containerd[1607]: time="2025-11-05T15:59:53.406530209Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 5 15:59:53.407378 containerd[1607]: time="2025-11-05T15:59:53.407321719Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:53.411165 containerd[1607]: time="2025-11-05T15:59:53.410312129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:53.411545 containerd[1607]: time="2025-11-05T15:59:53.411510109Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.46638951s" Nov 5 15:59:53.411603 containerd[1607]: time="2025-11-05T15:59:53.411546929Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 15:59:53.412678 containerd[1607]: time="2025-11-05T15:59:53.412654609Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:59:54.015015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77985503.mount: Deactivated successfully. 
Nov 5 15:59:54.020952 containerd[1607]: time="2025-11-05T15:59:54.020363279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:59:54.021315 containerd[1607]: time="2025-11-05T15:59:54.021277949Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:59:54.022017 containerd[1607]: time="2025-11-05T15:59:54.021957389Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:59:54.024940 containerd[1607]: time="2025-11-05T15:59:54.024014149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:59:54.024940 containerd[1607]: time="2025-11-05T15:59:54.024886109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 612.03337ms" Nov 5 15:59:54.025036 containerd[1607]: time="2025-11-05T15:59:54.024911369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 15:59:54.025576 containerd[1607]: time="2025-11-05T15:59:54.025530299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 15:59:54.605980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990504452.mount: Deactivated 
successfully. Nov 5 15:59:56.501179 containerd[1607]: time="2025-11-05T15:59:56.501043359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:56.502644 containerd[1607]: time="2025-11-05T15:59:56.502456509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 5 15:59:56.503251 containerd[1607]: time="2025-11-05T15:59:56.503216399Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:56.506996 containerd[1607]: time="2025-11-05T15:59:56.506959229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:59:56.508143 containerd[1607]: time="2025-11-05T15:59:56.507885639Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.4823104s" Nov 5 15:59:56.508192 containerd[1607]: time="2025-11-05T15:59:56.508145839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 15:59:58.626007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:59:58.629103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:59:58.811503 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:59:58.811598 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 5 15:59:58.812111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:59:58.812511 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak. Nov 5 15:59:58.821455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:59:58.840230 systemd[1]: Reload requested from client PID 2319 ('systemctl') (unit session-7.scope)... Nov 5 15:59:58.840333 systemd[1]: Reloading... Nov 5 15:59:58.960944 zram_generator::config[2363]: No configuration found. Nov 5 15:59:59.212581 systemd[1]: Reloading finished in 371 ms. Nov 5 15:59:59.279506 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:59:59.279606 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:59:59.280194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:59:59.280246 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.4M memory peak. Nov 5 15:59:59.282081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:59:59.466437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:59:59.472621 (kubelet)[2417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:59:59.515971 kubelet[2417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:59:59.516257 kubelet[2417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:59:59.516302 kubelet[2417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:59:59.516405 kubelet[2417]: I1105 15:59:59.516378 2417 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:59:59.988908 kubelet[2417]: I1105 15:59:59.988836 2417 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:59:59.988908 kubelet[2417]: I1105 15:59:59.988862 2417 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:59:59.989159 kubelet[2417]: I1105 15:59:59.989055 2417 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 16:00:00.024626 kubelet[2417]: I1105 16:00:00.023547 2417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 16:00:00.024626 kubelet[2417]: E1105 16:00:00.024441 2417 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.168.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 16:00:00.035269 kubelet[2417]: I1105 16:00:00.035238 2417 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 16:00:00.043349 kubelet[2417]: I1105 16:00:00.043332 2417 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 16:00:00.043741 kubelet[2417]: I1105 16:00:00.043712 2417 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 16:00:00.044153 kubelet[2417]: I1105 16:00:00.043793 2417 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-168-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 16:00:00.044318 kubelet[2417]: I1105 16:00:00.044304 2417 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 16:00:00.044381 
kubelet[2417]: I1105 16:00:00.044371 2417 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 16:00:00.045357 kubelet[2417]: I1105 16:00:00.045296 2417 state_mem.go:36] "Initialized new in-memory state store" Nov 5 16:00:00.048976 kubelet[2417]: I1105 16:00:00.048698 2417 kubelet.go:480] "Attempting to sync node with API server" Nov 5 16:00:00.048976 kubelet[2417]: I1105 16:00:00.048721 2417 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 16:00:00.048976 kubelet[2417]: I1105 16:00:00.048750 2417 kubelet.go:386] "Adding apiserver pod source" Nov 5 16:00:00.051364 kubelet[2417]: I1105 16:00:00.051348 2417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 16:00:00.056795 kubelet[2417]: E1105 16:00:00.056756 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.168.232:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-168-232&limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 16:00:00.058814 kubelet[2417]: I1105 16:00:00.057877 2417 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 16:00:00.058814 kubelet[2417]: I1105 16:00:00.058465 2417 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 16:00:00.059790 kubelet[2417]: W1105 16:00:00.059205 2417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 5 16:00:00.062656 kubelet[2417]: I1105 16:00:00.062628 2417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 16:00:00.062716 kubelet[2417]: I1105 16:00:00.062681 2417 server.go:1289] "Started kubelet" Nov 5 16:00:00.062751 kubelet[2417]: E1105 16:00:00.062737 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.168.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 16:00:00.066059 kubelet[2417]: I1105 16:00:00.066034 2417 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 16:00:00.067086 kubelet[2417]: I1105 16:00:00.067072 2417 server.go:317] "Adding debug handlers to kubelet server" Nov 5 16:00:00.067762 kubelet[2417]: I1105 16:00:00.067382 2417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 16:00:00.068141 kubelet[2417]: I1105 16:00:00.068108 2417 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 16:00:00.069936 kubelet[2417]: I1105 16:00:00.069898 2417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 16:00:00.073149 kubelet[2417]: E1105 16:00:00.071842 2417 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.168.232:6443/api/v1/namespaces/default/events\": dial tcp 172.238.168.232:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-168-232.187527a0a2a9091d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-168-232,UID:172-238-168-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-168-232,},FirstTimestamp:2025-11-05 
16:00:00.062654749 +0000 UTC m=+0.585244861,LastTimestamp:2025-11-05 16:00:00.062654749 +0000 UTC m=+0.585244861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-168-232,}" Nov 5 16:00:00.074981 kubelet[2417]: I1105 16:00:00.074530 2417 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 16:00:00.074981 kubelet[2417]: E1105 16:00:00.074830 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found" Nov 5 16:00:00.076420 kubelet[2417]: I1105 16:00:00.076384 2417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 16:00:00.077709 kubelet[2417]: I1105 16:00:00.077675 2417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 16:00:00.077757 kubelet[2417]: I1105 16:00:00.077739 2417 reconciler.go:26] "Reconciler: start to sync state" Nov 5 16:00:00.081726 kubelet[2417]: E1105 16:00:00.078650 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-232?timeout=10s\": dial tcp 172.238.168.232:6443: connect: connection refused" interval="200ms" Nov 5 16:00:00.081726 kubelet[2417]: I1105 16:00:00.079405 2417 factory.go:223] Registration of the systemd container factory successfully Nov 5 16:00:00.081726 kubelet[2417]: I1105 16:00:00.079519 2417 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 16:00:00.084799 kubelet[2417]: I1105 16:00:00.084782 2417 factory.go:223] Registration of the containerd container factory successfully Nov 5 16:00:00.108277 kubelet[2417]: E1105 16:00:00.108240 2417 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.168.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 16:00:00.111517 kubelet[2417]: I1105 16:00:00.111464 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 16:00:00.117722 kubelet[2417]: I1105 16:00:00.117690 2417 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 16:00:00.117722 kubelet[2417]: I1105 16:00:00.117716 2417 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 16:00:00.117797 kubelet[2417]: I1105 16:00:00.117735 2417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 16:00:00.117797 kubelet[2417]: I1105 16:00:00.117742 2417 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 16:00:00.117842 kubelet[2417]: E1105 16:00:00.117801 2417 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 16:00:00.119295 kubelet[2417]: E1105 16:00:00.119262 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.168.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 16:00:00.127739 kubelet[2417]: I1105 16:00:00.127706 2417 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 16:00:00.127739 kubelet[2417]: I1105 16:00:00.127727 2417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 16:00:00.127739 kubelet[2417]: I1105 16:00:00.127743 2417 state_mem.go:36] "Initialized new 
in-memory state store" Nov 5 16:00:00.129438 kubelet[2417]: I1105 16:00:00.129421 2417 policy_none.go:49] "None policy: Start" Nov 5 16:00:00.129524 kubelet[2417]: I1105 16:00:00.129510 2417 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 16:00:00.129628 kubelet[2417]: I1105 16:00:00.129615 2417 state_mem.go:35] "Initializing new in-memory state store" Nov 5 16:00:00.137453 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 16:00:00.151173 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 16:00:00.155433 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 16:00:00.166032 kubelet[2417]: E1105 16:00:00.166012 2417 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 16:00:00.166261 kubelet[2417]: I1105 16:00:00.166247 2417 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 16:00:00.166339 kubelet[2417]: I1105 16:00:00.166312 2417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 16:00:00.166589 kubelet[2417]: I1105 16:00:00.166577 2417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 16:00:00.169302 kubelet[2417]: E1105 16:00:00.169268 2417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 16:00:00.169741 kubelet[2417]: E1105 16:00:00.169708 2417 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-168-232\" not found" Nov 5 16:00:00.230162 systemd[1]: Created slice kubepods-burstable-pod89086651a1b39c0e45442cba9385c816.slice - libcontainer container kubepods-burstable-pod89086651a1b39c0e45442cba9385c816.slice. 
Nov 5 16:00:00.242613 kubelet[2417]: E1105 16:00:00.241672 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232" Nov 5 16:00:00.245371 systemd[1]: Created slice kubepods-burstable-pod9f18e3cb3df4bc2d357c860aaab67b50.slice - libcontainer container kubepods-burstable-pod9f18e3cb3df4bc2d357c860aaab67b50.slice. Nov 5 16:00:00.248152 kubelet[2417]: E1105 16:00:00.247968 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232" Nov 5 16:00:00.251168 systemd[1]: Created slice kubepods-burstable-podab807a29589b05066b62fd000fce054e.slice - libcontainer container kubepods-burstable-podab807a29589b05066b62fd000fce054e.slice. Nov 5 16:00:00.253268 kubelet[2417]: E1105 16:00:00.253230 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232" Nov 5 16:00:00.268092 kubelet[2417]: I1105 16:00:00.267997 2417 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-232" Nov 5 16:00:00.268530 kubelet[2417]: E1105 16:00:00.268494 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.232:6443/api/v1/nodes\": dial tcp 172.238.168.232:6443: connect: connection refused" node="172-238-168-232" Nov 5 16:00:00.278945 kubelet[2417]: E1105 16:00:00.278895 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-232?timeout=10s\": dial tcp 172.238.168.232:6443: connect: connection refused" interval="400ms" Nov 5 16:00:00.279081 kubelet[2417]: I1105 16:00:00.278892 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-ca-certs\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232" Nov 5 16:00:00.279081 kubelet[2417]: I1105 16:00:00.279027 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-k8s-certs\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232" Nov 5 16:00:00.279081 kubelet[2417]: I1105 16:00:00.279045 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232" Nov 5 16:00:00.380967 kubelet[2417]: I1105 16:00:00.379311 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-k8s-certs\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232" Nov 5 16:00:00.380967 kubelet[2417]: I1105 16:00:00.379363 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232" Nov 5 16:00:00.380967 kubelet[2417]: I1105 16:00:00.379393 2417 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-ca-certs\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232" Nov 5 16:00:00.380967 kubelet[2417]: I1105 16:00:00.379408 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-flexvolume-dir\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232" Nov 5 16:00:00.380967 kubelet[2417]: I1105 16:00:00.379421 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-kubeconfig\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232" Nov 5 16:00:00.381149 kubelet[2417]: I1105 16:00:00.379436 2417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab807a29589b05066b62fd000fce054e-kubeconfig\") pod \"kube-scheduler-172-238-168-232\" (UID: \"ab807a29589b05066b62fd000fce054e\") " pod="kube-system/kube-scheduler-172-238-168-232" Nov 5 16:00:00.471339 kubelet[2417]: I1105 16:00:00.471301 2417 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-232" Nov 5 16:00:00.471661 kubelet[2417]: E1105 16:00:00.471619 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.232:6443/api/v1/nodes\": dial tcp 172.238.168.232:6443: connect: connection refused" node="172-238-168-232" Nov 5 16:00:00.544407 kubelet[2417]: 
E1105 16:00:00.544055 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:00.545246 containerd[1607]: time="2025-11-05T16:00:00.544989579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-168-232,Uid:89086651a1b39c0e45442cba9385c816,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:00.549452 kubelet[2417]: E1105 16:00:00.549429 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:00.549729 containerd[1607]: time="2025-11-05T16:00:00.549672229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-168-232,Uid:9f18e3cb3df4bc2d357c860aaab67b50,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:00.554521 kubelet[2417]: E1105 16:00:00.554499 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:00.554871 containerd[1607]: time="2025-11-05T16:00:00.554844999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-168-232,Uid:ab807a29589b05066b62fd000fce054e,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:00.569787 containerd[1607]: time="2025-11-05T16:00:00.569235949Z" level=info msg="connecting to shim 0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd" address="unix:///run/containerd/s/5cdd7e015c8366e9792aa09c3d66aee345dccc1aeee239c2a83b91c0e8a8d94a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:00.596661 containerd[1607]: time="2025-11-05T16:00:00.596624749Z" level=info msg="connecting to shim 5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e" 
address="unix:///run/containerd/s/515616c421e4c96f2e1aaeca981610aaf511da1a8cf60d15eb3ffadef233851e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:00.602750 containerd[1607]: time="2025-11-05T16:00:00.602668299Z" level=info msg="connecting to shim cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a" address="unix:///run/containerd/s/af10f5e0ca2f7bde09e7b4889d1558893eba86863cd1443119df232b0e8130a0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:00.628377 systemd[1]: Started cri-containerd-0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd.scope - libcontainer container 0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd. Nov 5 16:00:00.650414 systemd[1]: Started cri-containerd-cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a.scope - libcontainer container cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a. Nov 5 16:00:00.655859 systemd[1]: Started cri-containerd-5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e.scope - libcontainer container 5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e. 
Nov 5 16:00:00.680481 kubelet[2417]: E1105 16:00:00.680411 2417 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.168.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-168-232?timeout=10s\": dial tcp 172.238.168.232:6443: connect: connection refused" interval="800ms"
Nov 5 16:00:00.732945 containerd[1607]: time="2025-11-05T16:00:00.732612289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-168-232,Uid:89086651a1b39c0e45442cba9385c816,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd\""
Nov 5 16:00:00.735144 kubelet[2417]: E1105 16:00:00.735115 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:00.740490 containerd[1607]: time="2025-11-05T16:00:00.740402899Z" level=info msg="CreateContainer within sandbox \"0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 16:00:00.748198 containerd[1607]: time="2025-11-05T16:00:00.748149869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-168-232,Uid:9f18e3cb3df4bc2d357c860aaab67b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a\""
Nov 5 16:00:00.748882 kubelet[2417]: E1105 16:00:00.748848 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:00.758301 containerd[1607]: time="2025-11-05T16:00:00.758262669Z" level=info msg="CreateContainer within sandbox \"cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 16:00:00.759478 containerd[1607]: time="2025-11-05T16:00:00.759453189Z" level=info msg="Container feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:00.767848 containerd[1607]: time="2025-11-05T16:00:00.767785859Z" level=info msg="Container 7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:00.775877 containerd[1607]: time="2025-11-05T16:00:00.775828409Z" level=info msg="CreateContainer within sandbox \"cc5589d3d50767cb8da9072c06e179a5d2005e6e5960495c6994b0ca9e2a665a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a\""
Nov 5 16:00:00.777425 containerd[1607]: time="2025-11-05T16:00:00.777172679Z" level=info msg="CreateContainer within sandbox \"0f112f4a042cd9e27fc3177c167409f5924946e53b065baa322853c9c8d3e6dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67\""
Nov 5 16:00:00.778584 containerd[1607]: time="2025-11-05T16:00:00.778555949Z" level=info msg="StartContainer for \"feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67\""
Nov 5 16:00:00.779516 containerd[1607]: time="2025-11-05T16:00:00.779489639Z" level=info msg="StartContainer for \"7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a\""
Nov 5 16:00:00.784108 containerd[1607]: time="2025-11-05T16:00:00.783902599Z" level=info msg="connecting to shim feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67" address="unix:///run/containerd/s/5cdd7e015c8366e9792aa09c3d66aee345dccc1aeee239c2a83b91c0e8a8d94a" protocol=ttrpc version=3
Nov 5 16:00:00.790290 containerd[1607]: time="2025-11-05T16:00:00.790261509Z" level=info msg="connecting to shim 7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a" address="unix:///run/containerd/s/af10f5e0ca2f7bde09e7b4889d1558893eba86863cd1443119df232b0e8130a0" protocol=ttrpc version=3
Nov 5 16:00:00.792937 containerd[1607]: time="2025-11-05T16:00:00.792328519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-168-232,Uid:ab807a29589b05066b62fd000fce054e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e\""
Nov 5 16:00:00.794756 kubelet[2417]: E1105 16:00:00.794679 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:00.799299 containerd[1607]: time="2025-11-05T16:00:00.799263419Z" level=info msg="CreateContainer within sandbox \"5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 16:00:00.807998 containerd[1607]: time="2025-11-05T16:00:00.807970289Z" level=info msg="Container 235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:00.816017 containerd[1607]: time="2025-11-05T16:00:00.815129479Z" level=info msg="CreateContainer within sandbox \"5c01fd0b2e14174a2cb5e1635673ad2af25800618b0bdab56d22f027dbe2ea9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84\""
Nov 5 16:00:00.816017 containerd[1607]: time="2025-11-05T16:00:00.815370519Z" level=info msg="StartContainer for \"235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84\""
Nov 5 16:00:00.816783 containerd[1607]: time="2025-11-05T16:00:00.816740599Z" level=info msg="connecting to shim 235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84" address="unix:///run/containerd/s/515616c421e4c96f2e1aaeca981610aaf511da1a8cf60d15eb3ffadef233851e" protocol=ttrpc version=3
Nov 5 16:00:00.827221 systemd[1]: Started cri-containerd-7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a.scope - libcontainer container 7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a.
Nov 5 16:00:00.836556 systemd[1]: Started cri-containerd-feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67.scope - libcontainer container feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67.
Nov 5 16:00:00.851060 systemd[1]: Started cri-containerd-235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84.scope - libcontainer container 235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84.
Nov 5 16:00:00.876861 kubelet[2417]: I1105 16:00:00.876840 2417 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-232"
Nov 5 16:00:00.877488 kubelet[2417]: E1105 16:00:00.877459 2417 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.168.232:6443/api/v1/nodes\": dial tcp 172.238.168.232:6443: connect: connection refused" node="172-238-168-232"
Nov 5 16:00:00.889093 kubelet[2417]: E1105 16:00:00.889072 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.168.232:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-168-232&limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 16:00:00.959209 containerd[1607]: time="2025-11-05T16:00:00.959081439Z" level=info msg="StartContainer for \"235ebdf906383fe7d9c28a57881ff1efb0bc4c1d963fb7dcf441a642f4b4ea84\" returns successfully"
Nov 5 16:00:00.964275 containerd[1607]: time="2025-11-05T16:00:00.964250909Z" level=info msg="StartContainer for \"feda33b06e809f06e534db7088b62ad56df1fbe00b3198222a57931fefa29d67\" returns successfully"
Nov 5 16:00:00.968756 containerd[1607]: time="2025-11-05T16:00:00.968719039Z" level=info msg="StartContainer for \"7ad2e2655ade8d86af52c7ccf32b88d7eb151a97acb81e0891f9797bb6fd686a\" returns successfully"
Nov 5 16:00:01.041189 kubelet[2417]: E1105 16:00:01.040909 2417 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.168.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.168.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 16:00:01.129594 kubelet[2417]: E1105 16:00:01.129566 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:01.129726 kubelet[2417]: E1105 16:00:01.129701 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:01.136846 kubelet[2417]: E1105 16:00:01.136821 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:01.137004 kubelet[2417]: E1105 16:00:01.136981 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:01.138459 kubelet[2417]: E1105 16:00:01.138436 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:01.138635 kubelet[2417]: E1105 16:00:01.138613 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:01.565244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646220119.mount: Deactivated successfully.
Nov 5 16:00:01.681499 kubelet[2417]: I1105 16:00:01.681461 2417 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-232"
Nov 5 16:00:02.141781 kubelet[2417]: E1105 16:00:02.141745 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:02.142112 kubelet[2417]: E1105 16:00:02.141881 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:02.142430 kubelet[2417]: E1105 16:00:02.142407 2417 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:02.142539 kubelet[2417]: E1105 16:00:02.142515 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:02.689440 kubelet[2417]: E1105 16:00:02.689401 2417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-168-232\" not found" node="172-238-168-232"
Nov 5 16:00:02.833865 kubelet[2417]: I1105 16:00:02.833831 2417 kubelet_node_status.go:78] "Successfully registered node" node="172-238-168-232"
Nov 5 16:00:02.834006 kubelet[2417]: E1105 16:00:02.833889 2417 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-238-168-232\": node \"172-238-168-232\" not found"
Nov 5 16:00:02.877482 kubelet[2417]: E1105 16:00:02.877455 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:02.978103 kubelet[2417]: E1105 16:00:02.977965 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.079028 kubelet[2417]: E1105 16:00:03.078962 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.179348 kubelet[2417]: E1105 16:00:03.179306 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.279442 kubelet[2417]: E1105 16:00:03.279368 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.379580 kubelet[2417]: E1105 16:00:03.379526 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.480472 kubelet[2417]: E1105 16:00:03.480425 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.580691 kubelet[2417]: E1105 16:00:03.580520 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.681989 kubelet[2417]: E1105 16:00:03.681905 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.783020 kubelet[2417]: E1105 16:00:03.782955 2417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-168-232\" not found"
Nov 5 16:00:03.876110 kubelet[2417]: I1105 16:00:03.875894 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-232"
Nov 5 16:00:03.886890 kubelet[2417]: I1105 16:00:03.886668 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:03.894018 kubelet[2417]: I1105 16:00:03.892503 2417 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:04.060053 kubelet[2417]: I1105 16:00:04.060007 2417 apiserver.go:52] "Watching apiserver"
Nov 5 16:00:04.063709 kubelet[2417]: E1105 16:00:04.063681 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:04.063790 kubelet[2417]: E1105 16:00:04.063769 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:04.063829 kubelet[2417]: E1105 16:00:04.063383 2417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:04.078398 kubelet[2417]: I1105 16:00:04.078324 2417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 16:00:04.932875 systemd[1]: Reload requested from client PID 2696 ('systemctl') (unit session-7.scope)...
Nov 5 16:00:04.932897 systemd[1]: Reloading...
Nov 5 16:00:05.066994 zram_generator::config[2740]: No configuration found.
Nov 5 16:00:05.409192 systemd[1]: Reloading finished in 475 ms.
Nov 5 16:00:05.437915 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:00:05.463616 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 16:00:05.463982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:00:05.464049 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 132M memory peak.
Nov 5 16:00:05.466971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 16:00:05.672349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 16:00:05.684545 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 16:00:05.741486 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 16:00:05.741486 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 16:00:05.741486 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 16:00:05.742049 kubelet[2792]: I1105 16:00:05.741530 2792 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 16:00:05.751265 kubelet[2792]: I1105 16:00:05.751238 2792 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 16:00:05.751265 kubelet[2792]: I1105 16:00:05.751261 2792 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 16:00:05.751562 kubelet[2792]: I1105 16:00:05.751535 2792 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 16:00:05.754697 kubelet[2792]: I1105 16:00:05.754636 2792 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 16:00:05.758278 kubelet[2792]: I1105 16:00:05.757575 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 16:00:05.762870 kubelet[2792]: I1105 16:00:05.762793 2792 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 16:00:05.768427 kubelet[2792]: I1105 16:00:05.768385 2792 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 16:00:05.768685 kubelet[2792]: I1105 16:00:05.768638 2792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 16:00:05.768818 kubelet[2792]: I1105 16:00:05.768675 2792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-168-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 16:00:05.768818 kubelet[2792]: I1105 16:00:05.768814 2792 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 16:00:05.769001 kubelet[2792]: I1105 16:00:05.768825 2792 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 16:00:05.769001 kubelet[2792]: I1105 16:00:05.768867 2792 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 16:00:05.769105 kubelet[2792]: I1105 16:00:05.769083 2792 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 16:00:05.769697 kubelet[2792]: I1105 16:00:05.769665 2792 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 16:00:05.769739 kubelet[2792]: I1105 16:00:05.769703 2792 kubelet.go:386] "Adding apiserver pod source"
Nov 5 16:00:05.769739 kubelet[2792]: I1105 16:00:05.769716 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 16:00:05.777786 kubelet[2792]: I1105 16:00:05.775643 2792 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 16:00:05.777786 kubelet[2792]: I1105 16:00:05.776490 2792 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 16:00:05.783811 kubelet[2792]: I1105 16:00:05.783794 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 16:00:05.783910 kubelet[2792]: I1105 16:00:05.783900 2792 server.go:1289] "Started kubelet"
Nov 5 16:00:05.784630 kubelet[2792]: I1105 16:00:05.784596 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 16:00:05.784952 kubelet[2792]: I1105 16:00:05.784799 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 16:00:05.792599 kubelet[2792]: I1105 16:00:05.792578 2792 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 16:00:05.796538 kubelet[2792]: I1105 16:00:05.793001 2792 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 16:00:05.797415 kubelet[2792]: I1105 16:00:05.794214 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 16:00:05.801723 kubelet[2792]: I1105 16:00:05.794324 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 16:00:05.802438 kubelet[2792]: I1105 16:00:05.802411 2792 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 16:00:05.802794 kubelet[2792]: I1105 16:00:05.802769 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 16:00:05.802960 kubelet[2792]: I1105 16:00:05.802891 2792 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 16:00:05.804094 kubelet[2792]: I1105 16:00:05.804066 2792 factory.go:223] Registration of the systemd container factory successfully
Nov 5 16:00:05.806589 kubelet[2792]: I1105 16:00:05.806537 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 16:00:05.808442 kubelet[2792]: E1105 16:00:05.808027 2792 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 16:00:05.810373 kubelet[2792]: I1105 16:00:05.810355 2792 factory.go:223] Registration of the containerd container factory successfully
Nov 5 16:00:05.822252 kubelet[2792]: I1105 16:00:05.822209 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 16:00:05.823539 kubelet[2792]: I1105 16:00:05.823501 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 16:00:05.823539 kubelet[2792]: I1105 16:00:05.823527 2792 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 16:00:05.823622 kubelet[2792]: I1105 16:00:05.823546 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 16:00:05.823622 kubelet[2792]: I1105 16:00:05.823553 2792 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 16:00:05.823622 kubelet[2792]: E1105 16:00:05.823598 2792 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 16:00:05.887664 kubelet[2792]: I1105 16:00:05.887626 2792 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 16:00:05.887664 kubelet[2792]: I1105 16:00:05.887647 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 16:00:05.887664 kubelet[2792]: I1105 16:00:05.887674 2792 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 16:00:05.887829 kubelet[2792]: I1105 16:00:05.887784 2792 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 16:00:05.887829 kubelet[2792]: I1105 16:00:05.887794 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 16:00:05.887829 kubelet[2792]: I1105 16:00:05.887808 2792 policy_none.go:49] "None policy: Start"
Nov 5 16:00:05.887829 kubelet[2792]: I1105 16:00:05.887819 2792 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 16:00:05.887829 kubelet[2792]: I1105 16:00:05.887829 2792 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 16:00:05.887948 kubelet[2792]: I1105 16:00:05.887908 2792 state_mem.go:75] "Updated machine memory state"
Nov 5 16:00:05.894168 kubelet[2792]: E1105 16:00:05.894135 2792 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 16:00:05.894329 kubelet[2792]: I1105 16:00:05.894301 2792 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 16:00:05.894372 kubelet[2792]: I1105 16:00:05.894321 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 16:00:05.895316 kubelet[2792]: I1105 16:00:05.894755 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 16:00:05.901763 kubelet[2792]: E1105 16:00:05.901614 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 16:00:05.918725 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 5 16:00:05.927571 kubelet[2792]: I1105 16:00:05.926995 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-168-232"
Nov 5 16:00:05.932652 kubelet[2792]: I1105 16:00:05.932117 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:05.933381 kubelet[2792]: I1105 16:00:05.933363 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:05.939484 kubelet[2792]: E1105 16:00:05.939183 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-168-232\" already exists" pod="kube-system/kube-scheduler-172-238-168-232"
Nov 5 16:00:05.941451 kubelet[2792]: E1105 16:00:05.941420 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-168-232\" already exists" pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:05.941537 kubelet[2792]: E1105 16:00:05.941510 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-168-232\" already exists" pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:05.953775 sudo[2833]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Nov 5 16:00:05.955104 sudo[2833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Nov 5 16:00:06.004449 kubelet[2792]: I1105 16:00:06.004406 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-ca-certs\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:06.004449 kubelet[2792]: I1105 16:00:06.004450 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-k8s-certs\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:06.004584 kubelet[2792]: I1105 16:00:06.004473 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89086651a1b39c0e45442cba9385c816-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-168-232\" (UID: \"89086651a1b39c0e45442cba9385c816\") " pod="kube-system/kube-apiserver-172-238-168-232"
Nov 5 16:00:06.004584 kubelet[2792]: I1105 16:00:06.004498 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-ca-certs\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:06.004584 kubelet[2792]: I1105 16:00:06.004514 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:06.004584 kubelet[2792]: I1105 16:00:06.004534 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab807a29589b05066b62fd000fce054e-kubeconfig\") pod \"kube-scheduler-172-238-168-232\" (UID: \"ab807a29589b05066b62fd000fce054e\") " pod="kube-system/kube-scheduler-172-238-168-232"
Nov 5 16:00:06.004584 kubelet[2792]: I1105 16:00:06.004547 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-flexvolume-dir\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:06.004693 kubelet[2792]: I1105 16:00:06.004564 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-k8s-certs\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:06.004693 kubelet[2792]: I1105 16:00:06.004577 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f18e3cb3df4bc2d357c860aaab67b50-kubeconfig\") pod \"kube-controller-manager-172-238-168-232\" (UID: \"9f18e3cb3df4bc2d357c860aaab67b50\") " pod="kube-system/kube-controller-manager-172-238-168-232"
Nov 5 16:00:06.015996 kubelet[2792]: I1105 16:00:06.015960 2792 kubelet_node_status.go:75] "Attempting to register node" node="172-238-168-232"
Nov 5 16:00:06.026723 kubelet[2792]: I1105 16:00:06.026687 2792 kubelet_node_status.go:124] "Node was previously registered" node="172-238-168-232"
Nov 5 16:00:06.026897 kubelet[2792]: I1105 16:00:06.026786 2792 kubelet_node_status.go:78] "Successfully registered node" node="172-238-168-232"
Nov 5 16:00:06.240967 kubelet[2792]: E1105 16:00:06.240551 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.242267 kubelet[2792]: E1105 16:00:06.242222 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.242595 kubelet[2792]: E1105 16:00:06.242578 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.327106 sudo[2833]: pam_unix(sudo:session): session closed for user root
Nov 5 16:00:06.774650 kubelet[2792]: I1105 16:00:06.774567 2792 apiserver.go:52] "Watching apiserver"
Nov 5 16:00:06.803832 kubelet[2792]: I1105 16:00:06.803786 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 16:00:06.864733 kubelet[2792]: E1105 16:00:06.864684 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.865458 kubelet[2792]: E1105 16:00:06.865430 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.866310 kubelet[2792]: E1105 16:00:06.866249 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:06.880633 kubelet[2792]: I1105 16:00:06.880554 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-168-232" podStartSLOduration=3.880543049 podStartE2EDuration="3.880543049s" podCreationTimestamp="2025-11-05 16:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:06.879909599 +0000 UTC m=+1.189298451" watchObservedRunningTime="2025-11-05 16:00:06.880543049 +0000 UTC m=+1.189931901"
Nov 5 16:00:06.920957 kubelet[2792]: I1105 16:00:06.920882 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-168-232" podStartSLOduration=3.920846899 podStartE2EDuration="3.920846899s" podCreationTimestamp="2025-11-05 16:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:06.912107239 +0000 UTC m=+1.221496091" watchObservedRunningTime="2025-11-05 16:00:06.920846899 +0000 UTC m=+1.230235751"
Nov 5 16:00:06.931539 kubelet[2792]: I1105 16:00:06.931481 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-168-232" podStartSLOduration=3.931434749 podStartE2EDuration="3.931434749s" podCreationTimestamp="2025-11-05 16:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:06.921712559 +0000 UTC m=+1.231101431" watchObservedRunningTime="2025-11-05 16:00:06.931434749 +0000 UTC m=+1.240823601"
Nov 5 16:00:07.744497 sudo[1860]: pam_unix(sudo:session): session closed for user root
Nov 5 16:00:07.797386 sshd[1859]: Connection closed by 139.178.89.65 port 60462
Nov 5 16:00:07.797956 sshd-session[1856]: pam_unix(sshd:session): session closed for user core
Nov 5 16:00:07.805668 systemd[1]: sshd@6-172.238.168.232:22-139.178.89.65:60462.service: Deactivated successfully.
Nov 5 16:00:07.809121 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 16:00:07.809621 systemd[1]: session-7.scope: Consumed 4.134s CPU time, 273.3M memory peak.
Nov 5 16:00:07.811641 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit.
Nov 5 16:00:07.813370 systemd-logind[1592]: Removed session 7.
Nov 5 16:00:07.866579 kubelet[2792]: E1105 16:00:07.866547 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:07.867812 kubelet[2792]: E1105 16:00:07.867531 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:07.868244 kubelet[2792]: E1105 16:00:07.867876 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:10.026990 kubelet[2792]: I1105 16:00:10.026855 2792 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 16:00:10.027852 containerd[1607]: time="2025-11-05T16:00:10.027639280Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 16:00:10.028262 kubelet[2792]: I1105 16:00:10.027852 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 16:00:10.161890 kubelet[2792]: E1105 16:00:10.161836 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:10.873633 kubelet[2792]: E1105 16:00:10.873583 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.027356 systemd[1]: Created slice kubepods-besteffort-pod7ab6208f_8b53_4a43_bf0c_30709bc9e212.slice - libcontainer container kubepods-besteffort-pod7ab6208f_8b53_4a43_bf0c_30709bc9e212.slice. Nov 5 16:00:11.038507 kubelet[2792]: I1105 16:00:11.038480 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ab6208f-8b53-4a43-bf0c-30709bc9e212-kube-proxy\") pod \"kube-proxy-nfqs4\" (UID: \"7ab6208f-8b53-4a43-bf0c-30709bc9e212\") " pod="kube-system/kube-proxy-nfqs4" Nov 5 16:00:11.040009 kubelet[2792]: I1105 16:00:11.039076 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq892\" (UniqueName: \"kubernetes.io/projected/7ab6208f-8b53-4a43-bf0c-30709bc9e212-kube-api-access-dq892\") pod \"kube-proxy-nfqs4\" (UID: \"7ab6208f-8b53-4a43-bf0c-30709bc9e212\") " pod="kube-system/kube-proxy-nfqs4" Nov 5 16:00:11.040009 kubelet[2792]: I1105 16:00:11.039238 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ab6208f-8b53-4a43-bf0c-30709bc9e212-xtables-lock\") pod \"kube-proxy-nfqs4\" (UID: \"7ab6208f-8b53-4a43-bf0c-30709bc9e212\") " 
pod="kube-system/kube-proxy-nfqs4" Nov 5 16:00:11.040009 kubelet[2792]: I1105 16:00:11.039261 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ab6208f-8b53-4a43-bf0c-30709bc9e212-lib-modules\") pod \"kube-proxy-nfqs4\" (UID: \"7ab6208f-8b53-4a43-bf0c-30709bc9e212\") " pod="kube-system/kube-proxy-nfqs4" Nov 5 16:00:11.059979 systemd[1]: Created slice kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice - libcontainer container kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice. Nov 5 16:00:11.140712 kubelet[2792]: I1105 16:00:11.140656 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-cgroup\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.140712 kubelet[2792]: I1105 16:00:11.140715 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cni-path\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140733 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-lib-modules\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140761 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hubble-tls\") pod 
\"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140789 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-run\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140805 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-bpf-maps\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140822 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf56dd2d-c28e-43a1-ad0d-389c305a2298-clustermesh-secrets\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142315 kubelet[2792]: I1105 16:00:11.140853 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-config-path\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 kubelet[2792]: I1105 16:00:11.140874 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-987fb\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-kube-api-access-987fb\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 
kubelet[2792]: I1105 16:00:11.140958 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hostproc\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 kubelet[2792]: I1105 16:00:11.140986 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-xtables-lock\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 kubelet[2792]: I1105 16:00:11.141034 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-net\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 kubelet[2792]: I1105 16:00:11.141068 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-etc-cni-netd\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.142480 kubelet[2792]: I1105 16:00:11.141085 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-kernel\") pod \"cilium-2vctm\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " pod="kube-system/cilium-2vctm" Nov 5 16:00:11.182240 kubelet[2792]: I1105 16:00:11.182196 2792 status_manager.go:895] "Failed to get status for pod" 
podUID="8d3ec580-976e-480e-b670-ca8f41be0ed4" pod="kube-system/cilium-operator-6c4d7847fc-sr9g7" err="pods \"cilium-operator-6c4d7847fc-sr9g7\" is forbidden: User \"system:node:172-238-168-232\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-238-168-232' and this object" Nov 5 16:00:11.198393 systemd[1]: Created slice kubepods-besteffort-pod8d3ec580_976e_480e_b670_ca8f41be0ed4.slice - libcontainer container kubepods-besteffort-pod8d3ec580_976e_480e_b670_ca8f41be0ed4.slice. Nov 5 16:00:11.242168 kubelet[2792]: I1105 16:00:11.242125 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d3ec580-976e-480e-b670-ca8f41be0ed4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sr9g7\" (UID: \"8d3ec580-976e-480e-b670-ca8f41be0ed4\") " pod="kube-system/cilium-operator-6c4d7847fc-sr9g7" Nov 5 16:00:11.242907 kubelet[2792]: I1105 16:00:11.242856 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrm77\" (UniqueName: \"kubernetes.io/projected/8d3ec580-976e-480e-b670-ca8f41be0ed4-kube-api-access-lrm77\") pod \"cilium-operator-6c4d7847fc-sr9g7\" (UID: \"8d3ec580-976e-480e-b670-ca8f41be0ed4\") " pod="kube-system/cilium-operator-6c4d7847fc-sr9g7" Nov 5 16:00:11.337639 kubelet[2792]: E1105 16:00:11.337267 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.338040 containerd[1607]: time="2025-11-05T16:00:11.337991981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfqs4,Uid:7ab6208f-8b53-4a43-bf0c-30709bc9e212,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:11.365721 kubelet[2792]: E1105 16:00:11.365496 2792 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.366454 containerd[1607]: time="2025-11-05T16:00:11.366424743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vctm,Uid:bf56dd2d-c28e-43a1-ad0d-389c305a2298,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:11.370301 containerd[1607]: time="2025-11-05T16:00:11.370258586Z" level=info msg="connecting to shim f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a" address="unix:///run/containerd/s/1c3757164a3425f32c428b1d1066eff0771acb7a1523589a437386598910bc68" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:11.387800 containerd[1607]: time="2025-11-05T16:00:11.387669159Z" level=info msg="connecting to shim 799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:11.407298 systemd[1]: Started cri-containerd-f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a.scope - libcontainer container f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a. Nov 5 16:00:11.427045 systemd[1]: Started cri-containerd-799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4.scope - libcontainer container 799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4. 
Nov 5 16:00:11.467641 containerd[1607]: time="2025-11-05T16:00:11.467549888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfqs4,Uid:7ab6208f-8b53-4a43-bf0c-30709bc9e212,Namespace:kube-system,Attempt:0,} returns sandbox id \"f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a\"" Nov 5 16:00:11.469709 kubelet[2792]: E1105 16:00:11.468601 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.481942 containerd[1607]: time="2025-11-05T16:00:11.481797523Z" level=info msg="CreateContainer within sandbox \"f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 16:00:11.484415 containerd[1607]: time="2025-11-05T16:00:11.484288196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2vctm,Uid:bf56dd2d-c28e-43a1-ad0d-389c305a2298,Namespace:kube-system,Attempt:0,} returns sandbox id \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\"" Nov 5 16:00:11.487886 kubelet[2792]: E1105 16:00:11.486509 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.491906 containerd[1607]: time="2025-11-05T16:00:11.491821474Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 16:00:11.500028 containerd[1607]: time="2025-11-05T16:00:11.500005778Z" level=info msg="Container c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:00:11.503349 kubelet[2792]: E1105 16:00:11.503311 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.504155 containerd[1607]: time="2025-11-05T16:00:11.504082345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sr9g7,Uid:8d3ec580-976e-480e-b670-ca8f41be0ed4,Namespace:kube-system,Attempt:0,}" Nov 5 16:00:11.506497 containerd[1607]: time="2025-11-05T16:00:11.506423161Z" level=info msg="CreateContainer within sandbox \"f02003d7020610cf6c00225a59d48e53799ed6e9f05514bc9ea6be4f8fa68d2a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5\"" Nov 5 16:00:11.507831 containerd[1607]: time="2025-11-05T16:00:11.507739901Z" level=info msg="StartContainer for \"c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5\"" Nov 5 16:00:11.510135 containerd[1607]: time="2025-11-05T16:00:11.510084248Z" level=info msg="connecting to shim c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5" address="unix:///run/containerd/s/1c3757164a3425f32c428b1d1066eff0771acb7a1523589a437386598910bc68" protocol=ttrpc version=3 Nov 5 16:00:11.522486 containerd[1607]: time="2025-11-05T16:00:11.522450116Z" level=info msg="connecting to shim 374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a" address="unix:///run/containerd/s/df7194eecf8853f08f66e39ce014162aaac2906875c0bc338446a808c78dabbd" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:00:11.542324 systemd[1]: Started cri-containerd-c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5.scope - libcontainer container c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5. Nov 5 16:00:11.562625 systemd[1]: Started cri-containerd-374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a.scope - libcontainer container 374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a. 
Nov 5 16:00:11.628984 containerd[1607]: time="2025-11-05T16:00:11.628870759Z" level=info msg="StartContainer for \"c1d18b815a004c9c2f3e85f6b696bdea40e4691bfd70185d8e6da70ab8a1ccd5\" returns successfully" Nov 5 16:00:11.659050 containerd[1607]: time="2025-11-05T16:00:11.658965153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sr9g7,Uid:8d3ec580-976e-480e-b670-ca8f41be0ed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\"" Nov 5 16:00:11.662986 kubelet[2792]: E1105 16:00:11.662822 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.882999 kubelet[2792]: E1105 16:00:11.882955 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.884357 kubelet[2792]: E1105 16:00:11.884331 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:11.897035 kubelet[2792]: I1105 16:00:11.896965 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nfqs4" podStartSLOduration=0.896553716 podStartE2EDuration="896.553716ms" podCreationTimestamp="2025-11-05 16:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:11.894491703 +0000 UTC m=+6.203880555" watchObservedRunningTime="2025-11-05 16:00:11.896553716 +0000 UTC m=+6.205942578" Nov 5 16:00:12.350754 kubelet[2792]: E1105 16:00:12.350700 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:12.889756 kubelet[2792]: E1105 16:00:12.889705 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:16.061444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673548803.mount: Deactivated successfully. Nov 5 16:00:17.270850 kubelet[2792]: E1105 16:00:17.270814 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:17.955279 containerd[1607]: time="2025-11-05T16:00:17.954409121Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:00:17.955279 containerd[1607]: time="2025-11-05T16:00:17.955251128Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 5 16:00:17.955777 containerd[1607]: time="2025-11-05T16:00:17.955756270Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:00:17.957529 containerd[1607]: time="2025-11-05T16:00:17.957507133Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.46561748s" Nov 5 16:00:17.957607 
containerd[1607]: time="2025-11-05T16:00:17.957592732Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 5 16:00:17.959617 containerd[1607]: time="2025-11-05T16:00:17.959550652Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 16:00:17.963737 containerd[1607]: time="2025-11-05T16:00:17.963683528Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 16:00:17.971137 containerd[1607]: time="2025-11-05T16:00:17.970461983Z" level=info msg="Container 189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:00:17.981957 containerd[1607]: time="2025-11-05T16:00:17.981902236Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\"" Nov 5 16:00:17.983667 containerd[1607]: time="2025-11-05T16:00:17.982561205Z" level=info msg="StartContainer for \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\"" Nov 5 16:00:17.984587 containerd[1607]: time="2025-11-05T16:00:17.984553985Z" level=info msg="connecting to shim 189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" protocol=ttrpc version=3 Nov 5 16:00:18.019114 systemd[1]: Started cri-containerd-189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5.scope - libcontainer container 
189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5. Nov 5 16:00:18.059539 containerd[1607]: time="2025-11-05T16:00:18.059488041Z" level=info msg="StartContainer for \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" returns successfully" Nov 5 16:00:18.074335 systemd[1]: cri-containerd-189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5.scope: Deactivated successfully. Nov 5 16:00:18.076518 containerd[1607]: time="2025-11-05T16:00:18.076414865Z" level=info msg="received exit event container_id:\"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" id:\"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" pid:3217 exited_at:{seconds:1762358418 nanos:75116734}" Nov 5 16:00:18.077104 containerd[1607]: time="2025-11-05T16:00:18.077054826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" id:\"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" pid:3217 exited_at:{seconds:1762358418 nanos:75116734}" Nov 5 16:00:18.105197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5-rootfs.mount: Deactivated successfully. 
Nov 5 16:00:18.907890 kubelet[2792]: E1105 16:00:18.907690 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:18.916976 containerd[1607]: time="2025-11-05T16:00:18.916873168Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 16:00:18.928575 containerd[1607]: time="2025-11-05T16:00:18.928501449Z" level=info msg="Container fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:00:18.938864 containerd[1607]: time="2025-11-05T16:00:18.938773320Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\"" Nov 5 16:00:18.939592 containerd[1607]: time="2025-11-05T16:00:18.939563658Z" level=info msg="StartContainer for \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\"" Nov 5 16:00:18.943281 containerd[1607]: time="2025-11-05T16:00:18.943258425Z" level=info msg="connecting to shim fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" protocol=ttrpc version=3 Nov 5 16:00:18.967168 systemd[1]: Started cri-containerd-fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f.scope - libcontainer container fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f. 
Nov 5 16:00:19.034768 containerd[1607]: time="2025-11-05T16:00:19.034690468Z" level=info msg="StartContainer for \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" returns successfully" Nov 5 16:00:19.055353 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 16:00:19.055835 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:00:19.055983 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:00:19.060592 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 16:00:19.064017 systemd[1]: cri-containerd-fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f.scope: Deactivated successfully. Nov 5 16:00:19.070379 containerd[1607]: time="2025-11-05T16:00:19.070334123Z" level=info msg="received exit event container_id:\"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" id:\"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" pid:3267 exited_at:{seconds:1762358419 nanos:68178183}" Nov 5 16:00:19.072327 containerd[1607]: time="2025-11-05T16:00:19.072283177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" id:\"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" pid:3267 exited_at:{seconds:1762358419 nanos:68178183}" Nov 5 16:00:19.106975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 16:00:19.123836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f-rootfs.mount: Deactivated successfully. 
Nov 5 16:00:19.560499 containerd[1607]: time="2025-11-05T16:00:19.560432965Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:00:19.561479 containerd[1607]: time="2025-11-05T16:00:19.561306873Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 5 16:00:19.562280 containerd[1607]: time="2025-11-05T16:00:19.562249741Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 16:00:19.563789 containerd[1607]: time="2025-11-05T16:00:19.563759180Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.604178909s" Nov 5 16:00:19.563841 containerd[1607]: time="2025-11-05T16:00:19.563814709Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 5 16:00:19.567346 containerd[1607]: time="2025-11-05T16:00:19.567317952Z" level=info msg="CreateContainer within sandbox \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 16:00:19.579454 containerd[1607]: time="2025-11-05T16:00:19.579018992Z" level=info msg="Container 
315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:00:19.583743 containerd[1607]: time="2025-11-05T16:00:19.583714208Z" level=info msg="CreateContainer within sandbox \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\"" Nov 5 16:00:19.585410 containerd[1607]: time="2025-11-05T16:00:19.584371500Z" level=info msg="StartContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\"" Nov 5 16:00:19.586173 containerd[1607]: time="2025-11-05T16:00:19.586152365Z" level=info msg="connecting to shim 315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583" address="unix:///run/containerd/s/df7194eecf8853f08f66e39ce014162aaac2906875c0bc338446a808c78dabbd" protocol=ttrpc version=3 Nov 5 16:00:19.610065 systemd[1]: Started cri-containerd-315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583.scope - libcontainer container 315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583. 
Nov 5 16:00:19.644788 containerd[1607]: time="2025-11-05T16:00:19.644731198Z" level=info msg="StartContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" returns successfully" Nov 5 16:00:19.916570 kubelet[2792]: E1105 16:00:19.916385 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:19.923271 kubelet[2792]: E1105 16:00:19.922208 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:00:19.931038 containerd[1607]: time="2025-11-05T16:00:19.930898095Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 16:00:19.945784 containerd[1607]: time="2025-11-05T16:00:19.945753923Z" level=info msg="Container 8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab: CDI devices from CRI Config.CDIDevices: []" Nov 5 16:00:19.953757 containerd[1607]: time="2025-11-05T16:00:19.953680765Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\"" Nov 5 16:00:19.955349 containerd[1607]: time="2025-11-05T16:00:19.955296493Z" level=info msg="StartContainer for \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\"" Nov 5 16:00:19.956630 containerd[1607]: time="2025-11-05T16:00:19.956565446Z" level=info msg="connecting to shim 8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" protocol=ttrpc 
version=3 Nov 5 16:00:19.977657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958188551.mount: Deactivated successfully. Nov 5 16:00:20.002189 systemd[1]: Started cri-containerd-8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab.scope - libcontainer container 8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab. Nov 5 16:00:20.115255 containerd[1607]: time="2025-11-05T16:00:20.115209344Z" level=info msg="StartContainer for \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" returns successfully" Nov 5 16:00:20.130818 systemd[1]: cri-containerd-8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab.scope: Deactivated successfully. Nov 5 16:00:20.133810 containerd[1607]: time="2025-11-05T16:00:20.133759208Z" level=info msg="received exit event container_id:\"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" id:\"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" pid:3357 exited_at:{seconds:1762358420 nanos:133418422}" Nov 5 16:00:20.137333 containerd[1607]: time="2025-11-05T16:00:20.137292033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" id:\"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" pid:3357 exited_at:{seconds:1762358420 nanos:133418422}" Nov 5 16:00:20.178821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab-rootfs.mount: Deactivated successfully. 
Nov 5 16:00:20.931440 kubelet[2792]: E1105 16:00:20.930178 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:20.932516 kubelet[2792]: E1105 16:00:20.931572 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:20.935850 containerd[1607]: time="2025-11-05T16:00:20.935783838Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 5 16:00:20.952644 containerd[1607]: time="2025-11-05T16:00:20.950631238Z" level=info msg="Container c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:20.959370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816283040.mount: Deactivated successfully.
Nov 5 16:00:20.960692 kubelet[2792]: I1105 16:00:20.960467 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sr9g7" podStartSLOduration=2.059894464 podStartE2EDuration="9.960451273s" podCreationTimestamp="2025-11-05 16:00:11 +0000 UTC" firstStartedPulling="2025-11-05 16:00:11.663792243 +0000 UTC m=+5.973181095" lastFinishedPulling="2025-11-05 16:00:19.564349052 +0000 UTC m=+13.873737904" observedRunningTime="2025-11-05 16:00:19.969009736 +0000 UTC m=+14.278398608" watchObservedRunningTime="2025-11-05 16:00:20.960451273 +0000 UTC m=+15.269840145"
Nov 5 16:00:20.964971 containerd[1607]: time="2025-11-05T16:00:20.964902266Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\""
Nov 5 16:00:20.966095 containerd[1607]: time="2025-11-05T16:00:20.966052932Z" level=info msg="StartContainer for \"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\""
Nov 5 16:00:20.966538 update_engine[1597]: I20251105 16:00:20.966027 1597 update_attempter.cc:509] Updating boot flags...
Nov 5 16:00:20.968632 containerd[1607]: time="2025-11-05T16:00:20.967750680Z" level=info msg="connecting to shim c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" protocol=ttrpc version=3
Nov 5 16:00:21.007202 systemd[1]: Started cri-containerd-c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0.scope - libcontainer container c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0.
Nov 5 16:00:21.105466 systemd[1]: cri-containerd-c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0.scope: Deactivated successfully.
Nov 5 16:00:21.112333 containerd[1607]: time="2025-11-05T16:00:21.108847716Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice/cri-containerd-c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0.scope/memory.events\": no such file or directory"
Nov 5 16:00:21.116649 containerd[1607]: time="2025-11-05T16:00:21.116608463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" id:\"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" pid:3406 exited_at:{seconds:1762358421 nanos:102908837}"
Nov 5 16:00:21.123950 containerd[1607]: time="2025-11-05T16:00:21.123895226Z" level=info msg="received exit event container_id:\"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" id:\"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" pid:3406 exited_at:{seconds:1762358421 nanos:102908837}"
Nov 5 16:00:21.164269 containerd[1607]: time="2025-11-05T16:00:21.164233394Z" level=info msg="StartContainer for \"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" returns successfully"
Nov 5 16:00:21.224717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0-rootfs.mount: Deactivated successfully.
Nov 5 16:00:21.938227 kubelet[2792]: E1105 16:00:21.938161 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:21.946339 containerd[1607]: time="2025-11-05T16:00:21.946285832Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 5 16:00:21.964570 containerd[1607]: time="2025-11-05T16:00:21.962355750Z" level=info msg="Container 570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:21.969261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555890043.mount: Deactivated successfully.
Nov 5 16:00:21.974535 containerd[1607]: time="2025-11-05T16:00:21.974472135Z" level=info msg="CreateContainer within sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\""
Nov 5 16:00:21.975395 containerd[1607]: time="2025-11-05T16:00:21.975359095Z" level=info msg="StartContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\""
Nov 5 16:00:21.977099 containerd[1607]: time="2025-11-05T16:00:21.977073084Z" level=info msg="connecting to shim 570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f" address="unix:///run/containerd/s/ad4f50cc736ea8f900ea39191102b331abf5e826de775aa765eb45fcc58b2be1" protocol=ttrpc version=3
Nov 5 16:00:22.019102 systemd[1]: Started cri-containerd-570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f.scope - libcontainer container 570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f.
Nov 5 16:00:22.073398 containerd[1607]: time="2025-11-05T16:00:22.073358607Z" level=info msg="StartContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" returns successfully"
Nov 5 16:00:22.170236 containerd[1607]: time="2025-11-05T16:00:22.170128612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" id:\"14c2af3e28cc497931d5e39fe3b7abc81b98a7ffdc7588c7e949283749947cc5\" pid:3484 exited_at:{seconds:1762358422 nanos:169642367}"
Nov 5 16:00:22.204715 kubelet[2792]: I1105 16:00:22.204596 2792 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 16:00:22.252679 systemd[1]: Created slice kubepods-burstable-pod5421e3d1_251e_4c27_8435_04fc699c39d4.slice - libcontainer container kubepods-burstable-pod5421e3d1_251e_4c27_8435_04fc699c39d4.slice.
Nov 5 16:00:22.261126 systemd[1]: Created slice kubepods-burstable-pod25c39d92_60e2_4a31_9e49_931fe725ba8d.slice - libcontainer container kubepods-burstable-pod25c39d92_60e2_4a31_9e49_931fe725ba8d.slice.
Nov 5 16:00:22.336026 kubelet[2792]: I1105 16:00:22.335978 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xp55\" (UniqueName: \"kubernetes.io/projected/25c39d92-60e2-4a31-9e49-931fe725ba8d-kube-api-access-5xp55\") pod \"coredns-674b8bbfcf-xcmrg\" (UID: \"25c39d92-60e2-4a31-9e49-931fe725ba8d\") " pod="kube-system/coredns-674b8bbfcf-xcmrg"
Nov 5 16:00:22.336351 kubelet[2792]: I1105 16:00:22.336284 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5421e3d1-251e-4c27-8435-04fc699c39d4-config-volume\") pod \"coredns-674b8bbfcf-tr8zm\" (UID: \"5421e3d1-251e-4c27-8435-04fc699c39d4\") " pod="kube-system/coredns-674b8bbfcf-tr8zm"
Nov 5 16:00:22.336382 kubelet[2792]: I1105 16:00:22.336308 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6zg\" (UniqueName: \"kubernetes.io/projected/5421e3d1-251e-4c27-8435-04fc699c39d4-kube-api-access-jk6zg\") pod \"coredns-674b8bbfcf-tr8zm\" (UID: \"5421e3d1-251e-4c27-8435-04fc699c39d4\") " pod="kube-system/coredns-674b8bbfcf-tr8zm"
Nov 5 16:00:22.336407 kubelet[2792]: I1105 16:00:22.336394 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c39d92-60e2-4a31-9e49-931fe725ba8d-config-volume\") pod \"coredns-674b8bbfcf-xcmrg\" (UID: \"25c39d92-60e2-4a31-9e49-931fe725ba8d\") " pod="kube-system/coredns-674b8bbfcf-xcmrg"
Nov 5 16:00:22.559065 kubelet[2792]: E1105 16:00:22.558891 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:22.560812 containerd[1607]: time="2025-11-05T16:00:22.560196509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tr8zm,Uid:5421e3d1-251e-4c27-8435-04fc699c39d4,Namespace:kube-system,Attempt:0,}"
Nov 5 16:00:22.566640 kubelet[2792]: E1105 16:00:22.566619 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:22.567879 containerd[1607]: time="2025-11-05T16:00:22.567846653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xcmrg,Uid:25c39d92-60e2-4a31-9e49-931fe725ba8d,Namespace:kube-system,Attempt:0,}"
Nov 5 16:00:22.945415 kubelet[2792]: E1105 16:00:22.945371 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:22.959895 kubelet[2792]: I1105 16:00:22.959852 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2vctm" podStartSLOduration=5.491152868 podStartE2EDuration="11.959838628s" podCreationTimestamp="2025-11-05 16:00:11 +0000 UTC" firstStartedPulling="2025-11-05 16:00:11.490101863 +0000 UTC m=+5.799490715" lastFinishedPulling="2025-11-05 16:00:17.958787593 +0000 UTC m=+12.268176475" observedRunningTime="2025-11-05 16:00:22.958547933 +0000 UTC m=+17.267936805" watchObservedRunningTime="2025-11-05 16:00:22.959838628 +0000 UTC m=+17.269227480"
Nov 5 16:00:23.946594 kubelet[2792]: E1105 16:00:23.946545 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:24.469746 systemd-networkd[1516]: cilium_host: Link UP
Nov 5 16:00:24.472401 systemd-networkd[1516]: cilium_net: Link UP
Nov 5 16:00:24.473153 systemd-networkd[1516]: cilium_net: Gained carrier
Nov 5 16:00:24.473548 systemd-networkd[1516]: cilium_host: Gained carrier
Nov 5 16:00:24.611157 systemd-networkd[1516]: cilium_vxlan: Link UP
Nov 5 16:00:24.611445 systemd-networkd[1516]: cilium_vxlan: Gained carrier
Nov 5 16:00:24.654350 systemd-networkd[1516]: cilium_host: Gained IPv6LL
Nov 5 16:00:24.686093 systemd-networkd[1516]: cilium_net: Gained IPv6LL
Nov 5 16:00:24.843118 kernel: NET: Registered PF_ALG protocol family
Nov 5 16:00:24.948070 kubelet[2792]: E1105 16:00:24.948033 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:25.556368 systemd-networkd[1516]: lxc_health: Link UP
Nov 5 16:00:25.556736 systemd-networkd[1516]: lxc_health: Gained carrier
Nov 5 16:00:25.952907 kubelet[2792]: E1105 16:00:25.952872 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:26.124682 kernel: eth0: renamed from tmp69e79
Nov 5 16:00:26.124666 systemd-networkd[1516]: lxca1c444d50a39: Link UP
Nov 5 16:00:26.129455 systemd-networkd[1516]: lxca1c444d50a39: Gained carrier
Nov 5 16:00:26.141142 systemd-networkd[1516]: lxc85d42902cbc6: Link UP
Nov 5 16:00:26.147940 kernel: eth0: renamed from tmp245ea
Nov 5 16:00:26.151436 systemd-networkd[1516]: lxc85d42902cbc6: Gained carrier
Nov 5 16:00:26.407170 systemd-networkd[1516]: cilium_vxlan: Gained IPv6LL
Nov 5 16:00:27.366365 systemd-networkd[1516]: lxc_health: Gained IPv6LL
Nov 5 16:00:27.376325 kubelet[2792]: E1105 16:00:27.376287 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:27.955191 kubelet[2792]: E1105 16:00:27.954629 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:28.007106 systemd-networkd[1516]: lxc85d42902cbc6: Gained IPv6LL
Nov 5 16:00:28.134981 systemd-networkd[1516]: lxca1c444d50a39: Gained IPv6LL
Nov 5 16:00:29.619625 containerd[1607]: time="2025-11-05T16:00:29.619263565Z" level=info msg="connecting to shim 69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63" address="unix:///run/containerd/s/36b3beef07feab0e390390f4219abf57adee2875a9dd723bbe8fd50d7d8d110f" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:00:29.683046 systemd[1]: Started cri-containerd-69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63.scope - libcontainer container 69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63.
Nov 5 16:00:29.688534 containerd[1607]: time="2025-11-05T16:00:29.688063944Z" level=info msg="connecting to shim 245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47" address="unix:///run/containerd/s/eeae854a6fa71aed59d19071c06b3cdedcb9b4732b451c500c8b77a374f0231b" namespace=k8s.io protocol=ttrpc version=3
Nov 5 16:00:29.736073 systemd[1]: Started cri-containerd-245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47.scope - libcontainer container 245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47.
Nov 5 16:00:29.841844 containerd[1607]: time="2025-11-05T16:00:29.841578599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xcmrg,Uid:25c39d92-60e2-4a31-9e49-931fe725ba8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63\""
Nov 5 16:00:29.843430 kubelet[2792]: E1105 16:00:29.843389 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:29.853250 containerd[1607]: time="2025-11-05T16:00:29.853165036Z" level=info msg="CreateContainer within sandbox \"69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 5 16:00:29.854416 containerd[1607]: time="2025-11-05T16:00:29.854346838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tr8zm,Uid:5421e3d1-251e-4c27-8435-04fc699c39d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47\""
Nov 5 16:00:29.857000 kubelet[2792]: E1105 16:00:29.856978 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:29.864524 containerd[1607]: time="2025-11-05T16:00:29.864446195Z" level=info msg="CreateContainer within sandbox \"245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 5 16:00:29.883443 containerd[1607]: time="2025-11-05T16:00:29.883102582Z" level=info msg="Container 45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:29.885669 containerd[1607]: time="2025-11-05T16:00:29.884811500Z" level=info msg="Container 3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:00:29.893474 containerd[1607]: time="2025-11-05T16:00:29.893446209Z" level=info msg="CreateContainer within sandbox \"69e790ab2ce4396d84bdb21fbecde3caf71b3b10e4400cf97450bfd6d924bf63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8\""
Nov 5 16:00:29.894420 containerd[1607]: time="2025-11-05T16:00:29.894387302Z" level=info msg="StartContainer for \"45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8\""
Nov 5 16:00:29.895806 containerd[1607]: time="2025-11-05T16:00:29.895758712Z" level=info msg="connecting to shim 45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8" address="unix:///run/containerd/s/36b3beef07feab0e390390f4219abf57adee2875a9dd723bbe8fd50d7d8d110f" protocol=ttrpc version=3
Nov 5 16:00:29.896124 containerd[1607]: time="2025-11-05T16:00:29.895877361Z" level=info msg="CreateContainer within sandbox \"245ea2cdf8e4c9bd0986816b1337f9e3d7627916015a412872ce4ed2dfe46c47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4\""
Nov 5 16:00:29.898387 containerd[1607]: time="2025-11-05T16:00:29.898336504Z" level=info msg="StartContainer for \"3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4\""
Nov 5 16:00:29.902756 containerd[1607]: time="2025-11-05T16:00:29.902455254Z" level=info msg="connecting to shim 3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4" address="unix:///run/containerd/s/eeae854a6fa71aed59d19071c06b3cdedcb9b4732b451c500c8b77a374f0231b" protocol=ttrpc version=3
Nov 5 16:00:29.933354 systemd[1]: Started cri-containerd-45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8.scope - libcontainer container 45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8.
Nov 5 16:00:29.947281 systemd[1]: Started cri-containerd-3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4.scope - libcontainer container 3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4.
Nov 5 16:00:30.003485 containerd[1607]: time="2025-11-05T16:00:30.003399885Z" level=info msg="StartContainer for \"45b70b3f9cff982156aaeb0026af2f1e25c30235f3e9a90668557152c3aa5fe8\" returns successfully"
Nov 5 16:00:30.013266 containerd[1607]: time="2025-11-05T16:00:30.013228049Z" level=info msg="StartContainer for \"3d3ec078f34a7a98aea504c26aaa1d16c314bd5fab63bdf7156e212efe666ed4\" returns successfully"
Nov 5 16:00:30.603293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount266026787.mount: Deactivated successfully.
Nov 5 16:00:30.977199 kubelet[2792]: E1105 16:00:30.976414 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:30.981982 kubelet[2792]: E1105 16:00:30.981659 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:30.994967 kubelet[2792]: I1105 16:00:30.994221 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tr8zm" podStartSLOduration=19.994195047 podStartE2EDuration="19.994195047s" podCreationTimestamp="2025-11-05 16:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:30.9907875 +0000 UTC m=+25.300176362" watchObservedRunningTime="2025-11-05 16:00:30.994195047 +0000 UTC m=+25.303583919"
Nov 5 16:00:31.031189 kubelet[2792]: I1105 16:00:31.030861 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xcmrg" podStartSLOduration=20.030844505 podStartE2EDuration="20.030844505s" podCreationTimestamp="2025-11-05 16:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:00:31.029244385 +0000 UTC m=+25.338633257" watchObservedRunningTime="2025-11-05 16:00:31.030844505 +0000 UTC m=+25.340233357"
Nov 5 16:00:31.983975 kubelet[2792]: E1105 16:00:31.983764 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:31.985747 kubelet[2792]: E1105 16:00:31.985108 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:32.985869 kubelet[2792]: E1105 16:00:32.985505 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:00:32.986813 kubelet[2792]: E1105 16:00:32.986691 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:12.824836 kubelet[2792]: E1105 16:01:12.824724 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:14.825053 kubelet[2792]: E1105 16:01:14.824982 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:20.824242 kubelet[2792]: E1105 16:01:20.824204 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:30.824454 kubelet[2792]: E1105 16:01:30.824188 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:38.824539 kubelet[2792]: E1105 16:01:38.824500 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:49.825877 kubelet[2792]: E1105 16:01:49.825059 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:55.826056 kubelet[2792]: E1105 16:01:55.825399 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:01:59.824667 kubelet[2792]: E1105 16:01:59.824292 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:02:18.825535 kubelet[2792]: E1105 16:02:18.825448 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:02:22.825375 kubelet[2792]: E1105 16:02:22.825302 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:02:37.833186 kubelet[2792]: E1105 16:02:37.832056 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:02:38.824701 kubelet[2792]: E1105 16:02:38.824654 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:02:44.824539 kubelet[2792]: E1105 16:02:44.824489 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:03:06.825051 kubelet[2792]: E1105 16:03:06.824913 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:03:08.825009 kubelet[2792]: E1105 16:03:08.824871 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:03:17.726564 systemd[1]: Started sshd@7-172.238.168.232:22-139.178.89.65:57442.service - OpenSSH per-connection server daemon (139.178.89.65:57442).
Nov 5 16:03:18.073679 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 57442 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:03:18.076041 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:03:18.082842 systemd-logind[1592]: New session 8 of user core.
Nov 5 16:03:18.091030 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 5 16:03:18.416177 sshd[4136]: Connection closed by 139.178.89.65 port 57442
Nov 5 16:03:18.417194 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Nov 5 16:03:18.422514 systemd[1]: sshd@7-172.238.168.232:22-139.178.89.65:57442.service: Deactivated successfully.
Nov 5 16:03:18.425413 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 16:03:18.427431 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit.
Nov 5 16:03:18.429570 systemd-logind[1592]: Removed session 8.
Nov 5 16:03:23.477847 systemd[1]: Started sshd@8-172.238.168.232:22-139.178.89.65:57454.service - OpenSSH per-connection server daemon (139.178.89.65:57454).
Nov 5 16:03:23.817309 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 57454 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:03:23.819450 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:03:23.826902 systemd-logind[1592]: New session 9 of user core.
Nov 5 16:03:23.834080 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 16:03:24.139066 sshd[4152]: Connection closed by 139.178.89.65 port 57454
Nov 5 16:03:24.140135 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Nov 5 16:03:24.146515 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit.
Nov 5 16:03:24.146859 systemd[1]: sshd@8-172.238.168.232:22-139.178.89.65:57454.service: Deactivated successfully.
Nov 5 16:03:24.149282 systemd[1]: session-9.scope: Deactivated successfully.
Nov 5 16:03:24.151559 systemd-logind[1592]: Removed session 9.
Nov 5 16:03:24.824463 kubelet[2792]: E1105 16:03:24.824361 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:03:29.206059 systemd[1]: Started sshd@9-172.238.168.232:22-139.178.89.65:39024.service - OpenSSH per-connection server daemon (139.178.89.65:39024).
Nov 5 16:03:29.564053 sshd[4164]: Accepted publickey for core from 139.178.89.65 port 39024 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:03:29.565273 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:03:29.572733 systemd-logind[1592]: New session 10 of user core.
Nov 5 16:03:29.578104 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 5 16:03:29.897246 sshd[4167]: Connection closed by 139.178.89.65 port 39024
Nov 5 16:03:29.898035 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Nov 5 16:03:29.903840 systemd[1]: sshd@9-172.238.168.232:22-139.178.89.65:39024.service: Deactivated successfully.
Nov 5 16:03:29.907316 systemd[1]: session-10.scope: Deactivated successfully.
Nov 5 16:03:29.908684 systemd-logind[1592]: Session 10 logged out. Waiting for processes to exit.
Nov 5 16:03:29.913190 systemd-logind[1592]: Removed session 10.
Nov 5 16:03:30.966348 update_engine[1597]: I20251105 16:03:30.966269 1597 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 5 16:03:30.966348 update_engine[1597]: I20251105 16:03:30.966337 1597 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 5 16:03:30.966784 update_engine[1597]: I20251105 16:03:30.966570 1597 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 5 16:03:30.967498 update_engine[1597]: I20251105 16:03:30.967377 1597 omaha_request_params.cc:62] Current group set to alpha
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967759 1597 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967779 1597 update_attempter.cc:643] Scheduling an action processor start.
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967798 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967833 1597 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967895 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967903 1597 omaha_request_action.cc:272] Request:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]:
Nov 5 16:03:30.968111 update_engine[1597]: I20251105 16:03:30.967911 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 16:03:30.968441 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 5 16:03:30.969093 update_engine[1597]: I20251105 16:03:30.969066 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 16:03:30.970051 update_engine[1597]: I20251105 16:03:30.970012 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 16:03:30.989227 update_engine[1597]: E20251105 16:03:30.989152 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 16:03:30.989227 update_engine[1597]: I20251105 16:03:30.989238 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 5 16:03:34.963787 systemd[1]: Started sshd@10-172.238.168.232:22-139.178.89.65:39028.service - OpenSSH per-connection server daemon (139.178.89.65:39028).
Nov 5 16:03:35.328304 sshd[4180]: Accepted publickey for core from 139.178.89.65 port 39028 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:03:35.330160 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:03:35.336992 systemd-logind[1592]: New session 11 of user core.
Nov 5 16:03:35.343094 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 5 16:03:35.674370 sshd[4183]: Connection closed by 139.178.89.65 port 39028
Nov 5 16:03:35.675442 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Nov 5 16:03:35.683084 systemd[1]: sshd@10-172.238.168.232:22-139.178.89.65:39028.service: Deactivated successfully.
Nov 5 16:03:35.686426 systemd[1]: session-11.scope: Deactivated successfully.
Nov 5 16:03:35.688134 systemd-logind[1592]: Session 11 logged out. Waiting for processes to exit.
Nov 5 16:03:35.690658 systemd-logind[1592]: Removed session 11.
Nov 5 16:03:40.744543 systemd[1]: Started sshd@11-172.238.168.232:22-139.178.89.65:42378.service - OpenSSH per-connection server daemon (139.178.89.65:42378).
Nov 5 16:03:40.967823 update_engine[1597]: I20251105 16:03:40.967715 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 16:03:40.967823 update_engine[1597]: I20251105 16:03:40.967849 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 16:03:40.968455 update_engine[1597]: I20251105 16:03:40.968421 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 5 16:03:40.970144 update_engine[1597]: E20251105 16:03:40.970096 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 5 16:03:40.970194 update_engine[1597]: I20251105 16:03:40.970156 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 5 16:03:41.116793 sshd[4196]: Accepted publickey for core from 139.178.89.65 port 42378 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:41.122078 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:41.131167 systemd-logind[1592]: New session 12 of user core. Nov 5 16:03:41.139063 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 16:03:41.450102 sshd[4199]: Connection closed by 139.178.89.65 port 42378 Nov 5 16:03:41.450831 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:41.458652 systemd[1]: sshd@11-172.238.168.232:22-139.178.89.65:42378.service: Deactivated successfully. Nov 5 16:03:41.462747 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 16:03:41.465701 systemd-logind[1592]: Session 12 logged out. Waiting for processes to exit. Nov 5 16:03:41.469465 systemd-logind[1592]: Removed session 12. Nov 5 16:03:41.522323 systemd[1]: Started sshd@12-172.238.168.232:22-139.178.89.65:42392.service - OpenSSH per-connection server daemon (139.178.89.65:42392). 
Nov 5 16:03:41.893855 sshd[4212]: Accepted publickey for core from 139.178.89.65 port 42392 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:41.896071 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:41.902001 systemd-logind[1592]: New session 13 of user core. Nov 5 16:03:41.909092 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 16:03:42.268286 sshd[4218]: Connection closed by 139.178.89.65 port 42392 Nov 5 16:03:42.273763 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:42.281758 systemd[1]: sshd@12-172.238.168.232:22-139.178.89.65:42392.service: Deactivated successfully. Nov 5 16:03:42.285088 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 16:03:42.287055 systemd-logind[1592]: Session 13 logged out. Waiting for processes to exit. Nov 5 16:03:42.289485 systemd-logind[1592]: Removed session 13. Nov 5 16:03:42.328271 systemd[1]: Started sshd@13-172.238.168.232:22-139.178.89.65:42394.service - OpenSSH per-connection server daemon (139.178.89.65:42394). Nov 5 16:03:42.669708 sshd[4228]: Accepted publickey for core from 139.178.89.65 port 42394 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:42.672064 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:42.680723 systemd-logind[1592]: New session 14 of user core. Nov 5 16:03:42.688091 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 16:03:42.992897 sshd[4231]: Connection closed by 139.178.89.65 port 42394 Nov 5 16:03:42.993880 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:43.001089 systemd[1]: sshd@13-172.238.168.232:22-139.178.89.65:42394.service: Deactivated successfully. Nov 5 16:03:43.004283 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 16:03:43.006124 systemd-logind[1592]: Session 14 logged out. 
Waiting for processes to exit. Nov 5 16:03:43.008578 systemd-logind[1592]: Removed session 14. Nov 5 16:03:44.824670 kubelet[2792]: E1105 16:03:44.824627 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:03:47.825443 kubelet[2792]: E1105 16:03:47.824837 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:03:48.054960 systemd[1]: Started sshd@14-172.238.168.232:22-139.178.89.65:50270.service - OpenSSH per-connection server daemon (139.178.89.65:50270). Nov 5 16:03:48.393987 sshd[4242]: Accepted publickey for core from 139.178.89.65 port 50270 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:48.395664 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:48.401812 systemd-logind[1592]: New session 15 of user core. Nov 5 16:03:48.408069 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 16:03:48.700898 sshd[4245]: Connection closed by 139.178.89.65 port 50270 Nov 5 16:03:48.701837 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:48.707492 systemd-logind[1592]: Session 15 logged out. Waiting for processes to exit. Nov 5 16:03:48.707791 systemd[1]: sshd@14-172.238.168.232:22-139.178.89.65:50270.service: Deactivated successfully. Nov 5 16:03:48.710564 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 16:03:48.713324 systemd-logind[1592]: Removed session 15. 
Nov 5 16:03:50.824408 kubelet[2792]: E1105 16:03:50.824335 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:03:50.967586 update_engine[1597]: I20251105 16:03:50.967501 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 16:03:50.968097 update_engine[1597]: I20251105 16:03:50.967605 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 16:03:50.968097 update_engine[1597]: I20251105 16:03:50.968068 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 5 16:03:50.969385 update_engine[1597]: E20251105 16:03:50.969322 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 5 16:03:50.969509 update_engine[1597]: I20251105 16:03:50.969403 1597 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 5 16:03:53.766397 systemd[1]: Started sshd@15-172.238.168.232:22-139.178.89.65:50286.service - OpenSSH per-connection server daemon (139.178.89.65:50286). Nov 5 16:03:54.120840 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 50286 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:54.122901 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:54.129208 systemd-logind[1592]: New session 16 of user core. Nov 5 16:03:54.136080 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 16:03:54.440573 sshd[4260]: Connection closed by 139.178.89.65 port 50286 Nov 5 16:03:54.441284 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Nov 5 16:03:54.446339 systemd-logind[1592]: Session 16 logged out. Waiting for processes to exit. Nov 5 16:03:54.447615 systemd[1]: sshd@15-172.238.168.232:22-139.178.89.65:50286.service: Deactivated successfully. 
Nov 5 16:03:54.450716 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 16:03:54.453256 systemd-logind[1592]: Removed session 16. Nov 5 16:03:59.507859 systemd[1]: Started sshd@16-172.238.168.232:22-139.178.89.65:56912.service - OpenSSH per-connection server daemon (139.178.89.65:56912). Nov 5 16:03:59.861739 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 56912 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:03:59.863739 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:03:59.869295 systemd-logind[1592]: New session 17 of user core. Nov 5 16:03:59.872185 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 16:04:00.184873 sshd[4275]: Connection closed by 139.178.89.65 port 56912 Nov 5 16:04:00.185537 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:00.190831 systemd[1]: sshd@16-172.238.168.232:22-139.178.89.65:56912.service: Deactivated successfully. Nov 5 16:04:00.192718 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 16:04:00.193835 systemd-logind[1592]: Session 17 logged out. Waiting for processes to exit. Nov 5 16:04:00.195906 systemd-logind[1592]: Removed session 17. Nov 5 16:04:00.244010 systemd[1]: Started sshd@17-172.238.168.232:22-139.178.89.65:56928.service - OpenSSH per-connection server daemon (139.178.89.65:56928). Nov 5 16:04:00.590614 sshd[4287]: Accepted publickey for core from 139.178.89.65 port 56928 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:00.593002 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:00.599252 systemd-logind[1592]: New session 18 of user core. Nov 5 16:04:00.604232 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 16:04:00.926112 sshd[4290]: Connection closed by 139.178.89.65 port 56928 Nov 5 16:04:00.926897 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:00.931645 systemd[1]: sshd@17-172.238.168.232:22-139.178.89.65:56928.service: Deactivated successfully. Nov 5 16:04:00.934401 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 16:04:00.935557 systemd-logind[1592]: Session 18 logged out. Waiting for processes to exit. Nov 5 16:04:00.937954 systemd-logind[1592]: Removed session 18. Nov 5 16:04:00.966015 update_engine[1597]: I20251105 16:04:00.965963 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 16:04:00.966290 update_engine[1597]: I20251105 16:04:00.966048 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 16:04:00.966486 update_engine[1597]: I20251105 16:04:00.966451 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 5 16:04:00.967130 update_engine[1597]: E20251105 16:04:00.967092 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 5 16:04:00.967203 update_engine[1597]: I20251105 16:04:00.967146 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 5 16:04:00.967203 update_engine[1597]: I20251105 16:04:00.967161 1597 omaha_request_action.cc:617] Omaha request response: Nov 5 16:04:00.967264 update_engine[1597]: E20251105 16:04:00.967231 1597 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 5 16:04:00.967288 update_engine[1597]: I20251105 16:04:00.967267 1597 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Nov 5 16:04:00.967288 update_engine[1597]: I20251105 16:04:00.967279 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 16:04:00.967330 update_engine[1597]: I20251105 16:04:00.967286 1597 update_attempter.cc:306] Processing Done. Nov 5 16:04:00.967330 update_engine[1597]: E20251105 16:04:00.967303 1597 update_attempter.cc:619] Update failed. Nov 5 16:04:00.967330 update_engine[1597]: I20251105 16:04:00.967309 1597 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 5 16:04:00.967330 update_engine[1597]: I20251105 16:04:00.967316 1597 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 5 16:04:00.967330 update_engine[1597]: I20251105 16:04:00.967322 1597 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 5 16:04:00.967425 update_engine[1597]: I20251105 16:04:00.967382 1597 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 5 16:04:00.967425 update_engine[1597]: I20251105 16:04:00.967406 1597 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 5 16:04:00.967425 update_engine[1597]: I20251105 16:04:00.967411 1597 omaha_request_action.cc:272] Request: Nov 5 16:04:00.967425 update_engine[1597]: I20251105 16:04:00.967419 1597 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 5 16:04:00.967719 update_engine[1597]: I20251105 16:04:00.967442 1597 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 5 16:04:00.968132 update_engine[1597]: I20251105 16:04:00.968047 1597 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 5 16:04:00.968626 update_engine[1597]: E20251105 16:04:00.968494 1597 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 5 16:04:00.968626 update_engine[1597]: I20251105 16:04:00.968536 1597 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 5 16:04:00.968626 update_engine[1597]: I20251105 16:04:00.968545 1597 omaha_request_action.cc:617] Omaha request response: Nov 5 16:04:00.968626 update_engine[1597]: I20251105 16:04:00.968554 1597 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 16:04:00.968731 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 5 16:04:00.969238 update_engine[1597]: I20251105 16:04:00.969163 1597 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 5 16:04:00.969238 update_engine[1597]: I20251105 16:04:00.969187 1597 update_attempter.cc:306] Processing Done. Nov 5 16:04:00.969238 update_engine[1597]: I20251105 16:04:00.969202 1597 update_attempter.cc:310] Error event sent. Nov 5 16:04:00.969304 update_engine[1597]: I20251105 16:04:00.969240 1597 update_check_scheduler.cc:74] Next update check in 42m2s Nov 5 16:04:00.969619 locksmithd[1647]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 5 16:04:00.989255 systemd[1]: Started sshd@18-172.238.168.232:22-139.178.89.65:56930.service - OpenSSH per-connection server daemon (139.178.89.65:56930). Nov 5 16:04:01.352826 sshd[4299]: Accepted publickey for core from 139.178.89.65 port 56930 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:01.355131 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:01.362420 systemd-logind[1592]: New session 19 of user core. 
Nov 5 16:04:01.376085 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 16:04:02.272832 sshd[4302]: Connection closed by 139.178.89.65 port 56930 Nov 5 16:04:02.274302 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:02.280354 systemd-logind[1592]: Session 19 logged out. Waiting for processes to exit. Nov 5 16:04:02.281427 systemd[1]: sshd@18-172.238.168.232:22-139.178.89.65:56930.service: Deactivated successfully. Nov 5 16:04:02.286881 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 16:04:02.292168 systemd-logind[1592]: Removed session 19. Nov 5 16:04:02.340151 systemd[1]: Started sshd@19-172.238.168.232:22-139.178.89.65:56946.service - OpenSSH per-connection server daemon (139.178.89.65:56946). Nov 5 16:04:02.692646 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 56946 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:02.694950 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:02.702721 systemd-logind[1592]: New session 20 of user core. Nov 5 16:04:02.709113 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 16:04:03.162239 sshd[4323]: Connection closed by 139.178.89.65 port 56946 Nov 5 16:04:03.164123 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:03.169871 systemd-logind[1592]: Session 20 logged out. Waiting for processes to exit. Nov 5 16:04:03.170242 systemd[1]: sshd@19-172.238.168.232:22-139.178.89.65:56946.service: Deactivated successfully. Nov 5 16:04:03.173336 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 16:04:03.175532 systemd-logind[1592]: Removed session 20. Nov 5 16:04:03.226158 systemd[1]: Started sshd@20-172.238.168.232:22-139.178.89.65:56954.service - OpenSSH per-connection server daemon (139.178.89.65:56954). 
Nov 5 16:04:03.584468 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 56954 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:03.586236 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:03.593316 systemd-logind[1592]: New session 21 of user core. Nov 5 16:04:03.600165 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 16:04:03.907536 sshd[4336]: Connection closed by 139.178.89.65 port 56954 Nov 5 16:04:03.908004 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:03.915312 systemd-logind[1592]: Session 21 logged out. Waiting for processes to exit. Nov 5 16:04:03.916229 systemd[1]: sshd@20-172.238.168.232:22-139.178.89.65:56954.service: Deactivated successfully. Nov 5 16:04:03.919457 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 16:04:03.922087 systemd-logind[1592]: Removed session 21. Nov 5 16:04:07.825512 kubelet[2792]: E1105 16:04:07.824945 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:04:08.976701 systemd[1]: Started sshd@21-172.238.168.232:22-139.178.89.65:52100.service - OpenSSH per-connection server daemon (139.178.89.65:52100). Nov 5 16:04:09.345207 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 52100 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:09.347445 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:09.355063 systemd-logind[1592]: New session 22 of user core. Nov 5 16:04:09.360094 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 16:04:09.660330 sshd[4353]: Connection closed by 139.178.89.65 port 52100 Nov 5 16:04:09.661147 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:09.666659 systemd[1]: sshd@21-172.238.168.232:22-139.178.89.65:52100.service: Deactivated successfully. Nov 5 16:04:09.669742 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 16:04:09.670904 systemd-logind[1592]: Session 22 logged out. Waiting for processes to exit. Nov 5 16:04:09.673028 systemd-logind[1592]: Removed session 22. Nov 5 16:04:13.827953 kubelet[2792]: E1105 16:04:13.827892 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:04:14.730966 systemd[1]: Started sshd@22-172.238.168.232:22-139.178.89.65:52116.service - OpenSSH per-connection server daemon (139.178.89.65:52116). Nov 5 16:04:14.825984 kubelet[2792]: E1105 16:04:14.825907 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:04:15.101385 sshd[4369]: Accepted publickey for core from 139.178.89.65 port 52116 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:15.103647 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:15.110067 systemd-logind[1592]: New session 23 of user core. Nov 5 16:04:15.120093 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 16:04:15.428947 sshd[4372]: Connection closed by 139.178.89.65 port 52116 Nov 5 16:04:15.429705 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:15.437580 systemd[1]: sshd@22-172.238.168.232:22-139.178.89.65:52116.service: Deactivated successfully. Nov 5 16:04:15.441063 systemd[1]: session-23.scope: Deactivated successfully. 
Nov 5 16:04:15.442870 systemd-logind[1592]: Session 23 logged out. Waiting for processes to exit. Nov 5 16:04:15.445437 systemd-logind[1592]: Removed session 23. Nov 5 16:04:20.498176 systemd[1]: Started sshd@23-172.238.168.232:22-139.178.89.65:36384.service - OpenSSH per-connection server daemon (139.178.89.65:36384). Nov 5 16:04:20.864340 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 36384 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:20.866282 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:20.873777 systemd-logind[1592]: New session 24 of user core. Nov 5 16:04:20.878090 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 16:04:21.199695 sshd[4387]: Connection closed by 139.178.89.65 port 36384 Nov 5 16:04:21.200974 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:21.208216 systemd[1]: sshd@23-172.238.168.232:22-139.178.89.65:36384.service: Deactivated successfully. Nov 5 16:04:21.211941 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 16:04:21.213198 systemd-logind[1592]: Session 24 logged out. Waiting for processes to exit. Nov 5 16:04:21.215280 systemd-logind[1592]: Removed session 24. Nov 5 16:04:21.262658 systemd[1]: Started sshd@24-172.238.168.232:22-139.178.89.65:36398.service - OpenSSH per-connection server daemon (139.178.89.65:36398). Nov 5 16:04:21.606053 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 36398 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:21.608015 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:21.616188 systemd-logind[1592]: New session 25 of user core. Nov 5 16:04:21.618058 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 5 16:04:23.250257 containerd[1607]: time="2025-11-05T16:04:23.250155643Z" level=info msg="StopContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" with timeout 30 (s)" Nov 5 16:04:23.252710 containerd[1607]: time="2025-11-05T16:04:23.252559407Z" level=info msg="Stop container \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" with signal terminated" Nov 5 16:04:23.276588 systemd[1]: cri-containerd-315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583.scope: Deactivated successfully. Nov 5 16:04:23.285813 containerd[1607]: time="2025-11-05T16:04:23.285565222Z" level=info msg="received exit event container_id:\"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" id:\"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" pid:3325 exited_at:{seconds:1762358663 nanos:284632612}" Nov 5 16:04:23.286125 containerd[1607]: time="2025-11-05T16:04:23.286101777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" id:\"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" pid:3325 exited_at:{seconds:1762358663 nanos:284632612}" Nov 5 16:04:23.291346 containerd[1607]: time="2025-11-05T16:04:23.291309101Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 16:04:23.300947 containerd[1607]: time="2025-11-05T16:04:23.300889628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" id:\"a176636616103726e9b634a6646a3d8ec9df38139c3514f8939d22294c8ed1a9\" pid:4430 exited_at:{seconds:1762358663 nanos:300484992}" Nov 5 16:04:23.308152 containerd[1607]: time="2025-11-05T16:04:23.308116250Z" level=info 
msg="StopContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" with timeout 2 (s)" Nov 5 16:04:23.309029 containerd[1607]: time="2025-11-05T16:04:23.308980800Z" level=info msg="Stop container \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" with signal terminated" Nov 5 16:04:23.329680 systemd-networkd[1516]: lxc_health: Link DOWN Nov 5 16:04:23.330155 systemd-networkd[1516]: lxc_health: Lost carrier Nov 5 16:04:23.354903 systemd[1]: cri-containerd-570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f.scope: Deactivated successfully. Nov 5 16:04:23.355637 systemd[1]: cri-containerd-570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f.scope: Consumed 7.243s CPU time, 124.2M memory peak, 144K read from disk, 13.3M written to disk. Nov 5 16:04:23.369035 containerd[1607]: time="2025-11-05T16:04:23.366117296Z" level=info msg="received exit event container_id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" pid:3455 exited_at:{seconds:1762358663 nanos:365780819}" Nov 5 16:04:23.369035 containerd[1607]: time="2025-11-05T16:04:23.366358553Z" level=info msg="TaskExit event in podsandbox handler container_id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" id:\"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" pid:3455 exited_at:{seconds:1762358663 nanos:365780819}" Nov 5 16:04:23.371006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583-rootfs.mount: Deactivated successfully. 
Nov 5 16:04:23.384634 containerd[1607]: time="2025-11-05T16:04:23.384588237Z" level=info msg="StopContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" returns successfully" Nov 5 16:04:23.387450 containerd[1607]: time="2025-11-05T16:04:23.387261458Z" level=info msg="StopPodSandbox for \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\"" Nov 5 16:04:23.387450 containerd[1607]: time="2025-11-05T16:04:23.387351937Z" level=info msg="Container to stop \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.411441 systemd[1]: cri-containerd-374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a.scope: Deactivated successfully. Nov 5 16:04:23.417691 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f-rootfs.mount: Deactivated successfully. Nov 5 16:04:23.421686 containerd[1607]: time="2025-11-05T16:04:23.421617149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" id:\"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" pid:3017 exit_status:137 exited_at:{seconds:1762358663 nanos:419717759}" Nov 5 16:04:23.433188 containerd[1607]: time="2025-11-05T16:04:23.433125475Z" level=info msg="StopContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" returns successfully" Nov 5 16:04:23.434156 containerd[1607]: time="2025-11-05T16:04:23.434134654Z" level=info msg="StopPodSandbox for \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\"" Nov 5 16:04:23.434803 containerd[1607]: time="2025-11-05T16:04:23.434683368Z" level=info msg="Container to stop \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.434803 
containerd[1607]: time="2025-11-05T16:04:23.434734418Z" level=info msg="Container to stop \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.434803 containerd[1607]: time="2025-11-05T16:04:23.434747608Z" level=info msg="Container to stop \"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.434803 containerd[1607]: time="2025-11-05T16:04:23.434757677Z" level=info msg="Container to stop \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.434803 containerd[1607]: time="2025-11-05T16:04:23.434767057Z" level=info msg="Container to stop \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 16:04:23.446197 systemd[1]: cri-containerd-799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4.scope: Deactivated successfully. Nov 5 16:04:23.483433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a-rootfs.mount: Deactivated successfully. 
Nov 5 16:04:23.493045 containerd[1607]: time="2025-11-05T16:04:23.492978051Z" level=info msg="shim disconnected" id=374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a namespace=k8s.io Nov 5 16:04:23.493287 containerd[1607]: time="2025-11-05T16:04:23.493221578Z" level=warning msg="cleaning up after shim disconnected" id=374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a namespace=k8s.io Nov 5 16:04:23.493371 containerd[1607]: time="2025-11-05T16:04:23.493242648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 16:04:23.501491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4-rootfs.mount: Deactivated successfully. Nov 5 16:04:23.505218 containerd[1607]: time="2025-11-05T16:04:23.505143030Z" level=info msg="shim disconnected" id=799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4 namespace=k8s.io Nov 5 16:04:23.505218 containerd[1607]: time="2025-11-05T16:04:23.505188940Z" level=warning msg="cleaning up after shim disconnected" id=799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4 namespace=k8s.io Nov 5 16:04:23.505357 containerd[1607]: time="2025-11-05T16:04:23.505201870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 16:04:23.521466 containerd[1607]: time="2025-11-05T16:04:23.521406495Z" level=info msg="TaskExit event in podsandbox handler container_id:\"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" id:\"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" pid:2944 exit_status:137 exited_at:{seconds:1762358663 nanos:455391875}" Nov 5 16:04:23.521578 containerd[1607]: time="2025-11-05T16:04:23.521542584Z" level=info msg="received exit event sandbox_id:\"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" exit_status:137 exited_at:{seconds:1762358663 nanos:455391875}" Nov 5 16:04:23.524410 containerd[1607]: time="2025-11-05T16:04:23.524102996Z" level=info 
msg="received exit event sandbox_id:\"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" exit_status:137 exited_at:{seconds:1762358663 nanos:419717759}" Nov 5 16:04:23.526749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a-shm.mount: Deactivated successfully. Nov 5 16:04:23.527875 containerd[1607]: time="2025-11-05T16:04:23.527184893Z" level=info msg="TearDown network for sandbox \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" successfully" Nov 5 16:04:23.528779 containerd[1607]: time="2025-11-05T16:04:23.528757336Z" level=info msg="StopPodSandbox for \"799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4\" returns successfully" Nov 5 16:04:23.528904 containerd[1607]: time="2025-11-05T16:04:23.527219483Z" level=info msg="TearDown network for sandbox \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" successfully" Nov 5 16:04:23.529073 containerd[1607]: time="2025-11-05T16:04:23.528998984Z" level=info msg="StopPodSandbox for \"374088dccc2de27d42fbd7e734372e7b7bb9941c884baf9f76587f92b722997a\" returns successfully" Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639622 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-lib-modules\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639679 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-cgroup\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639703 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-net\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639722 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hostproc\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639737 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-xtables-lock\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.639719 kubelet[2792]: I1105 16:04:23.639752 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-etc-cni-netd\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640717 kubelet[2792]: I1105 16:04:23.639777 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-config-path\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640717 kubelet[2792]: I1105 16:04:23.639801 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf56dd2d-c28e-43a1-ad0d-389c305a2298-clustermesh-secrets\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640717 
kubelet[2792]: I1105 16:04:23.639818 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hubble-tls\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640717 kubelet[2792]: I1105 16:04:23.639831 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-run\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640717 kubelet[2792]: I1105 16:04:23.639878 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrm77\" (UniqueName: \"kubernetes.io/projected/8d3ec580-976e-480e-b670-ca8f41be0ed4-kube-api-access-lrm77\") pod \"8d3ec580-976e-480e-b670-ca8f41be0ed4\" (UID: \"8d3ec580-976e-480e-b670-ca8f41be0ed4\") " Nov 5 16:04:23.640717 kubelet[2792]: I1105 16:04:23.639897 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cni-path\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640999 kubelet[2792]: I1105 16:04:23.639916 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-987fb\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-kube-api-access-987fb\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640999 kubelet[2792]: I1105 16:04:23.639980 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-kernel\") pod 
\"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640999 kubelet[2792]: I1105 16:04:23.640013 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d3ec580-976e-480e-b670-ca8f41be0ed4-cilium-config-path\") pod \"8d3ec580-976e-480e-b670-ca8f41be0ed4\" (UID: \"8d3ec580-976e-480e-b670-ca8f41be0ed4\") " Nov 5 16:04:23.640999 kubelet[2792]: I1105 16:04:23.640029 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-bpf-maps\") pod \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\" (UID: \"bf56dd2d-c28e-43a1-ad0d-389c305a2298\") " Nov 5 16:04:23.640999 kubelet[2792]: I1105 16:04:23.640112 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641135 kubelet[2792]: I1105 16:04:23.640153 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641135 kubelet[2792]: I1105 16:04:23.640187 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641135 kubelet[2792]: I1105 16:04:23.640222 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641135 kubelet[2792]: I1105 16:04:23.640246 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641135 kubelet[2792]: I1105 16:04:23.640261 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.641267 kubelet[2792]: I1105 16:04:23.640276 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.644181 kubelet[2792]: I1105 16:04:23.644077 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.644949 kubelet[2792]: I1105 16:04:23.644652 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.644949 kubelet[2792]: I1105 16:04:23.644836 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 16:04:23.646010 kubelet[2792]: I1105 16:04:23.645902 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:04:23.653519 kubelet[2792]: I1105 16:04:23.653495 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d3ec580-976e-480e-b670-ca8f41be0ed4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d3ec580-976e-480e-b670-ca8f41be0ed4" (UID: "8d3ec580-976e-480e-b670-ca8f41be0ed4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 16:04:23.654216 kubelet[2792]: I1105 16:04:23.654047 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-kube-api-access-987fb" (OuterVolumeSpecName: "kube-api-access-987fb") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "kube-api-access-987fb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:04:23.654296 kubelet[2792]: I1105 16:04:23.654112 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d3ec580-976e-480e-b670-ca8f41be0ed4-kube-api-access-lrm77" (OuterVolumeSpecName: "kube-api-access-lrm77") pod "8d3ec580-976e-480e-b670-ca8f41be0ed4" (UID: "8d3ec580-976e-480e-b670-ca8f41be0ed4"). InnerVolumeSpecName "kube-api-access-lrm77". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:04:23.654636 kubelet[2792]: I1105 16:04:23.654585 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf56dd2d-c28e-43a1-ad0d-389c305a2298-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 16:04:23.655363 kubelet[2792]: I1105 16:04:23.655317 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf56dd2d-c28e-43a1-ad0d-389c305a2298" (UID: "bf56dd2d-c28e-43a1-ad0d-389c305a2298"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741109 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-bpf-maps\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741154 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-lib-modules\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741167 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-cgroup\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741178 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-net\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741190 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hostproc\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741198 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-xtables-lock\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741207 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-etc-cni-netd\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741195 kubelet[2792]: I1105 16:04:23.741215 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-config-path\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741226 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf56dd2d-c28e-43a1-ad0d-389c305a2298-clustermesh-secrets\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741236 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-hubble-tls\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741245 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cilium-run\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741256 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lrm77\" (UniqueName: \"kubernetes.io/projected/8d3ec580-976e-480e-b670-ca8f41be0ed4-kube-api-access-lrm77\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741266 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-cni-path\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741276 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-987fb\" (UniqueName: \"kubernetes.io/projected/bf56dd2d-c28e-43a1-ad0d-389c305a2298-kube-api-access-987fb\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741285 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf56dd2d-c28e-43a1-ad0d-389c305a2298-host-proc-sys-kernel\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.741693 kubelet[2792]: I1105 16:04:23.741294 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d3ec580-976e-480e-b670-ca8f41be0ed4-cilium-config-path\") on node \"172-238-168-232\" DevicePath \"\"" Nov 5 16:04:23.837663 systemd[1]: Removed slice kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice - libcontainer container kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice. Nov 5 16:04:23.837764 systemd[1]: kubepods-burstable-podbf56dd2d_c28e_43a1_ad0d_389c305a2298.slice: Consumed 7.383s CPU time, 124.6M memory peak, 144K read from disk, 13.3M written to disk. Nov 5 16:04:23.841603 systemd[1]: Removed slice kubepods-besteffort-pod8d3ec580_976e_480e_b670_ca8f41be0ed4.slice - libcontainer container kubepods-besteffort-pod8d3ec580_976e_480e_b670_ca8f41be0ed4.slice. Nov 5 16:04:24.362848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-799a14feea7247d4e9db0409de53084266754338adaaa533cafa9549599435b4-shm.mount: Deactivated successfully. Nov 5 16:04:24.363032 systemd[1]: var-lib-kubelet-pods-8d3ec580\x2d976e\x2d480e\x2db670\x2dca8f41be0ed4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrm77.mount: Deactivated successfully. 
Nov 5 16:04:24.363134 systemd[1]: var-lib-kubelet-pods-bf56dd2d\x2dc28e\x2d43a1\x2dad0d\x2d389c305a2298-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d987fb.mount: Deactivated successfully. Nov 5 16:04:24.363214 systemd[1]: var-lib-kubelet-pods-bf56dd2d\x2dc28e\x2d43a1\x2dad0d\x2d389c305a2298-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 16:04:24.363288 systemd[1]: var-lib-kubelet-pods-bf56dd2d\x2dc28e\x2d43a1\x2dad0d\x2d389c305a2298-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 5 16:04:24.526058 kubelet[2792]: I1105 16:04:24.525913 2792 scope.go:117] "RemoveContainer" containerID="315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583" Nov 5 16:04:24.531554 containerd[1607]: time="2025-11-05T16:04:24.531196394Z" level=info msg="RemoveContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\"" Nov 5 16:04:24.538315 containerd[1607]: time="2025-11-05T16:04:24.538180990Z" level=info msg="RemoveContainer for \"315066a8de938c94010a1a1a3fc13637091d5a0529203a5bf637eb384e046583\" returns successfully" Nov 5 16:04:24.541215 kubelet[2792]: I1105 16:04:24.541167 2792 scope.go:117] "RemoveContainer" containerID="570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f" Nov 5 16:04:24.559188 containerd[1607]: time="2025-11-05T16:04:24.558911249Z" level=info msg="RemoveContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\"" Nov 5 16:04:24.571651 containerd[1607]: time="2025-11-05T16:04:24.571386626Z" level=info msg="RemoveContainer for \"570477bf7eece93e3441559861dfe62ee59e5e27ba10eb1e77d56b85dc3df94f\" returns successfully" Nov 5 16:04:24.572046 kubelet[2792]: I1105 16:04:24.572022 2792 scope.go:117] "RemoveContainer" containerID="c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0" Nov 5 16:04:24.574794 containerd[1607]: time="2025-11-05T16:04:24.574752240Z" level=info msg="RemoveContainer for 
\"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\"" Nov 5 16:04:24.579860 containerd[1607]: time="2025-11-05T16:04:24.579718287Z" level=info msg="RemoveContainer for \"c45923fddfff40d2f0778cbaa9db694a5ebbac55e8b209a1dfaab371b70856b0\" returns successfully" Nov 5 16:04:24.580135 kubelet[2792]: I1105 16:04:24.580092 2792 scope.go:117] "RemoveContainer" containerID="8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab" Nov 5 16:04:24.584118 containerd[1607]: time="2025-11-05T16:04:24.584069061Z" level=info msg="RemoveContainer for \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\"" Nov 5 16:04:24.591432 containerd[1607]: time="2025-11-05T16:04:24.591349213Z" level=info msg="RemoveContainer for \"8bbde11d26486a181c0504a4b0676ea1b32d780f03e93fc1604fd9ac27859fab\" returns successfully" Nov 5 16:04:24.591835 kubelet[2792]: I1105 16:04:24.591788 2792 scope.go:117] "RemoveContainer" containerID="fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f" Nov 5 16:04:24.594620 containerd[1607]: time="2025-11-05T16:04:24.594478320Z" level=info msg="RemoveContainer for \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\"" Nov 5 16:04:24.598317 containerd[1607]: time="2025-11-05T16:04:24.598266139Z" level=info msg="RemoveContainer for \"fced8dd4a750209dcae736abf44230355d46d34bb79172565c39e31fe8fa710f\" returns successfully" Nov 5 16:04:24.598601 kubelet[2792]: I1105 16:04:24.598574 2792 scope.go:117] "RemoveContainer" containerID="189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5" Nov 5 16:04:24.600873 containerd[1607]: time="2025-11-05T16:04:24.600776573Z" level=info msg="RemoveContainer for \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\"" Nov 5 16:04:24.604100 containerd[1607]: time="2025-11-05T16:04:24.604040388Z" level=info msg="RemoveContainer for \"189f8d8e8f34bf64fd48e480325ea9709ea89fbd95e7596d0cb12bb190892be5\" returns successfully" Nov 5 16:04:24.791897 
containerd[1607]: time="2025-11-05T16:04:24.791780996Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1762358663 nanos:419717759}" Nov 5 16:04:25.243799 sshd[4404]: Connection closed by 139.178.89.65 port 36398 Nov 5 16:04:25.244217 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:25.251075 systemd[1]: sshd@24-172.238.168.232:22-139.178.89.65:36398.service: Deactivated successfully. Nov 5 16:04:25.253644 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 16:04:25.255038 systemd-logind[1592]: Session 25 logged out. Waiting for processes to exit. Nov 5 16:04:25.257508 systemd-logind[1592]: Removed session 25. Nov 5 16:04:25.319682 systemd[1]: Started sshd@25-172.238.168.232:22-139.178.89.65:36406.service - OpenSSH per-connection server daemon (139.178.89.65:36406). Nov 5 16:04:25.671636 sshd[4563]: Accepted publickey for core from 139.178.89.65 port 36406 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY Nov 5 16:04:25.673519 sshd-session[4563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 16:04:25.680743 systemd-logind[1592]: New session 26 of user core. Nov 5 16:04:25.686089 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 5 16:04:25.824964 kubelet[2792]: E1105 16:04:25.824777 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:04:25.826627 kubelet[2792]: I1105 16:04:25.826398 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d3ec580-976e-480e-b670-ca8f41be0ed4" path="/var/lib/kubelet/pods/8d3ec580-976e-480e-b670-ca8f41be0ed4/volumes" Nov 5 16:04:25.827169 kubelet[2792]: I1105 16:04:25.827137 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf56dd2d-c28e-43a1-ad0d-389c305a2298" path="/var/lib/kubelet/pods/bf56dd2d-c28e-43a1-ad0d-389c305a2298/volumes" Nov 5 16:04:25.971180 kubelet[2792]: E1105 16:04:25.971017 2792 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 16:04:26.422100 systemd[1]: Created slice kubepods-burstable-podd7fadd08_16d4_4255_ad73_61eb4a8a461d.slice - libcontainer container kubepods-burstable-podd7fadd08_16d4_4255_ad73_61eb4a8a461d.slice. Nov 5 16:04:26.430106 sshd[4566]: Connection closed by 139.178.89.65 port 36406 Nov 5 16:04:26.432150 sshd-session[4563]: pam_unix(sshd:session): session closed for user core Nov 5 16:04:26.440221 systemd[1]: sshd@25-172.238.168.232:22-139.178.89.65:36406.service: Deactivated successfully. Nov 5 16:04:26.444318 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 16:04:26.448389 systemd-logind[1592]: Session 26 logged out. Waiting for processes to exit. Nov 5 16:04:26.449862 systemd-logind[1592]: Removed session 26. 
Nov 5 16:04:26.460082 kubelet[2792]: I1105 16:04:26.459901 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-xtables-lock\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.460261 kubelet[2792]: I1105 16:04:26.460243 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-cilium-run\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462019 kubelet[2792]: I1105 16:04:26.461965 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-bpf-maps\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462145 kubelet[2792]: I1105 16:04:26.461993 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-lib-modules\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462280 kubelet[2792]: I1105 16:04:26.462125 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7fadd08-16d4-4255-ad73-61eb4a8a461d-hubble-tls\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462280 kubelet[2792]: I1105 16:04:26.462250 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-cni-path\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462515 kubelet[2792]: I1105 16:04:26.462443 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-host-proc-sys-net\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462623 kubelet[2792]: I1105 16:04:26.462468 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-host-proc-sys-kernel\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462738 kubelet[2792]: I1105 16:04:26.462685 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7fadd08-16d4-4255-ad73-61eb4a8a461d-clustermesh-secrets\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462738 kubelet[2792]: I1105 16:04:26.462712 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-hostproc\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462939 kubelet[2792]: I1105 16:04:26.462877 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-etc-cni-netd\") pod 
\"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.462939 kubelet[2792]: I1105 16:04:26.462904 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7fadd08-16d4-4255-ad73-61eb4a8a461d-cilium-cgroup\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.463065 kubelet[2792]: I1105 16:04:26.463048 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7fadd08-16d4-4255-ad73-61eb4a8a461d-cilium-config-path\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.463235 kubelet[2792]: I1105 16:04:26.463184 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7fadd08-16d4-4255-ad73-61eb4a8a461d-cilium-ipsec-secrets\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.463235 kubelet[2792]: I1105 16:04:26.463209 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6m9n\" (UniqueName: \"kubernetes.io/projected/d7fadd08-16d4-4255-ad73-61eb4a8a461d-kube-api-access-g6m9n\") pod \"cilium-qvd8b\" (UID: \"d7fadd08-16d4-4255-ad73-61eb4a8a461d\") " pod="kube-system/cilium-qvd8b" Nov 5 16:04:26.504700 systemd[1]: Started sshd@26-172.238.168.232:22-139.178.89.65:44018.service - OpenSSH per-connection server daemon (139.178.89.65:44018). 
Nov 5 16:04:26.727405 kubelet[2792]: E1105 16:04:26.727251 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Nov 5 16:04:26.728539 containerd[1607]: time="2025-11-05T16:04:26.728479826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvd8b,Uid:d7fadd08-16d4-4255-ad73-61eb4a8a461d,Namespace:kube-system,Attempt:0,}" Nov 5 16:04:26.749132 containerd[1607]: time="2025-11-05T16:04:26.749078210Z" level=info msg="connecting to shim df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 16:04:26.778171 systemd[1]: Started cri-containerd-df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b.scope - libcontainer container df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b. 
Nov 5 16:04:26.821801 containerd[1607]: time="2025-11-05T16:04:26.821730879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvd8b,Uid:d7fadd08-16d4-4255-ad73-61eb4a8a461d,Namespace:kube-system,Attempt:0,} returns sandbox id \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\""
Nov 5 16:04:26.823002 kubelet[2792]: E1105 16:04:26.822907 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:26.830717 containerd[1607]: time="2025-11-05T16:04:26.830677736Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 5 16:04:26.837316 containerd[1607]: time="2025-11-05T16:04:26.837286026Z" level=info msg="Container a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:26.841110 containerd[1607]: time="2025-11-05T16:04:26.841079607Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\""
Nov 5 16:04:26.841958 containerd[1607]: time="2025-11-05T16:04:26.841825389Z" level=info msg="StartContainer for \"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\""
Nov 5 16:04:26.843147 containerd[1607]: time="2025-11-05T16:04:26.843067776Z" level=info msg="connecting to shim a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" protocol=ttrpc version=3
Nov 5 16:04:26.874451 sshd[4577]: Accepted publickey for core from 139.178.89.65 port 44018 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:04:26.878371 systemd[1]: Started cri-containerd-a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0.scope - libcontainer container a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0.
Nov 5 16:04:26.878941 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:26.892775 systemd-logind[1592]: New session 27 of user core.
Nov 5 16:04:26.897061 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 16:04:26.929956 containerd[1607]: time="2025-11-05T16:04:26.929864147Z" level=info msg="StartContainer for \"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\" returns successfully"
Nov 5 16:04:26.941566 systemd[1]: cri-containerd-a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0.scope: Deactivated successfully.
Nov 5 16:04:26.945429 containerd[1607]: time="2025-11-05T16:04:26.945373925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\" id:\"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\" pid:4640 exited_at:{seconds:1762358666 nanos:944890950}"
Nov 5 16:04:26.945619 containerd[1607]: time="2025-11-05T16:04:26.945389925Z" level=info msg="received exit event container_id:\"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\" id:\"a12e68faed787bce37cdc8ea95c7cbab5c7514489e1fac60569e5d44e7999ea0\" pid:4640 exited_at:{seconds:1762358666 nanos:944890950}"
Nov 5 16:04:27.135237 sshd[4646]: Connection closed by 139.178.89.65 port 44018
Nov 5 16:04:27.136479 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:27.142437 systemd[1]: sshd@26-172.238.168.232:22-139.178.89.65:44018.service: Deactivated successfully.
Nov 5 16:04:27.145175 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 16:04:27.147127 systemd-logind[1592]: Session 27 logged out. Waiting for processes to exit.
Nov 5 16:04:27.148626 systemd-logind[1592]: Removed session 27.
Nov 5 16:04:27.201192 systemd[1]: Started sshd@27-172.238.168.232:22-139.178.89.65:44024.service - OpenSSH per-connection server daemon (139.178.89.65:44024).
Nov 5 16:04:27.555045 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 44024 ssh2: RSA SHA256:QS+LCGHtJSOGkygHIzRq0CEqHcfGVZLkLvsIZWiUKYY
Nov 5 16:04:27.557499 kubelet[2792]: E1105 16:04:27.556033 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:27.562241 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 16:04:27.567086 containerd[1607]: time="2025-11-05T16:04:27.565893439Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 5 16:04:27.586250 systemd-logind[1592]: New session 28 of user core.
Nov 5 16:04:27.588110 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 5 16:04:27.598263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3101482055.mount: Deactivated successfully.
Nov 5 16:04:27.603859 containerd[1607]: time="2025-11-05T16:04:27.603792256Z" level=info msg="Container 9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:27.613801 containerd[1607]: time="2025-11-05T16:04:27.613751963Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\""
Nov 5 16:04:27.614812 containerd[1607]: time="2025-11-05T16:04:27.614787462Z" level=info msg="StartContainer for \"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\""
Nov 5 16:04:27.616233 containerd[1607]: time="2025-11-05T16:04:27.616194547Z" level=info msg="connecting to shim 9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" protocol=ttrpc version=3
Nov 5 16:04:27.644076 systemd[1]: Started cri-containerd-9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c.scope - libcontainer container 9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c.
Nov 5 16:04:27.688950 containerd[1607]: time="2025-11-05T16:04:27.688844054Z" level=info msg="StartContainer for \"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\" returns successfully"
Nov 5 16:04:27.697533 systemd[1]: cri-containerd-9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c.scope: Deactivated successfully.
Nov 5 16:04:27.699365 containerd[1607]: time="2025-11-05T16:04:27.699146017Z" level=info msg="received exit event container_id:\"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\" id:\"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\" pid:4696 exited_at:{seconds:1762358667 nanos:698966819}"
Nov 5 16:04:27.699365 containerd[1607]: time="2025-11-05T16:04:27.699341985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\" id:\"9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c\" pid:4696 exited_at:{seconds:1762358667 nanos:698966819}"
Nov 5 16:04:28.563679 kubelet[2792]: E1105 16:04:28.563611 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:28.574670 containerd[1607]: time="2025-11-05T16:04:28.573190189Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 5 16:04:28.574318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9366b4e94e34b8176739e28f8e26b4e759165d467107229b0243f5844d04f19c-rootfs.mount: Deactivated successfully.
Nov 5 16:04:28.603946 containerd[1607]: time="2025-11-05T16:04:28.597478089Z" level=info msg="Container c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:28.609461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751555569.mount: Deactivated successfully.
Nov 5 16:04:28.614129 containerd[1607]: time="2025-11-05T16:04:28.614081648Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\""
Nov 5 16:04:28.616276 containerd[1607]: time="2025-11-05T16:04:28.616251306Z" level=info msg="StartContainer for \"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\""
Nov 5 16:04:28.618948 containerd[1607]: time="2025-11-05T16:04:28.618455853Z" level=info msg="connecting to shim c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" protocol=ttrpc version=3
Nov 5 16:04:28.650075 systemd[1]: Started cri-containerd-c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b.scope - libcontainer container c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b.
Nov 5 16:04:28.703363 containerd[1607]: time="2025-11-05T16:04:28.703315580Z" level=info msg="StartContainer for \"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\" returns successfully"
Nov 5 16:04:28.707376 systemd[1]: cri-containerd-c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b.scope: Deactivated successfully.
Nov 5 16:04:28.709255 containerd[1607]: time="2025-11-05T16:04:28.708727504Z" level=info msg="received exit event container_id:\"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\" id:\"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\" pid:4747 exited_at:{seconds:1762358668 nanos:708135481}"
Nov 5 16:04:28.709255 containerd[1607]: time="2025-11-05T16:04:28.709173430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\" id:\"c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b\" pid:4747 exited_at:{seconds:1762358668 nanos:708135481}"
Nov 5 16:04:29.567370 kubelet[2792]: E1105 16:04:29.567293 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:29.574536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c435f0ec85224ff462637dcb08c6ec9f92d1b957b6ffe039fa7389c1a05cf79b-rootfs.mount: Deactivated successfully.
Nov 5 16:04:29.576093 containerd[1607]: time="2025-11-05T16:04:29.574646499Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 5 16:04:29.586884 containerd[1607]: time="2025-11-05T16:04:29.586845025Z" level=info msg="Container d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:29.594418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376482027.mount: Deactivated successfully.
Nov 5 16:04:29.599129 containerd[1607]: time="2025-11-05T16:04:29.599073880Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\""
Nov 5 16:04:29.599645 containerd[1607]: time="2025-11-05T16:04:29.599611625Z" level=info msg="StartContainer for \"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\""
Nov 5 16:04:29.600938 containerd[1607]: time="2025-11-05T16:04:29.600870132Z" level=info msg="connecting to shim d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" protocol=ttrpc version=3
Nov 5 16:04:29.625224 systemd[1]: Started cri-containerd-d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f.scope - libcontainer container d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f.
Nov 5 16:04:29.664995 systemd[1]: cri-containerd-d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f.scope: Deactivated successfully.
Nov 5 16:04:29.665708 containerd[1607]: time="2025-11-05T16:04:29.665684521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\" id:\"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\" pid:4784 exited_at:{seconds:1762358669 nanos:665294835}"
Nov 5 16:04:29.666604 containerd[1607]: time="2025-11-05T16:04:29.666560442Z" level=info msg="received exit event container_id:\"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\" id:\"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\" pid:4784 exited_at:{seconds:1762358669 nanos:665294835}"
Nov 5 16:04:29.682848 containerd[1607]: time="2025-11-05T16:04:29.682823656Z" level=info msg="StartContainer for \"d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f\" returns successfully"
Nov 5 16:04:29.704036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2074f47104be11922324178d2504cf388d5ccc79d8ff8c368743243e72f057f-rootfs.mount: Deactivated successfully.
Nov 5 16:04:30.577595 kubelet[2792]: E1105 16:04:30.575617 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:30.586769 containerd[1607]: time="2025-11-05T16:04:30.586648213Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 5 16:04:30.611941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279641463.mount: Deactivated successfully.
Nov 5 16:04:30.614956 containerd[1607]: time="2025-11-05T16:04:30.613417422Z" level=info msg="Container ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80: CDI devices from CRI Config.CDIDevices: []"
Nov 5 16:04:30.620397 containerd[1607]: time="2025-11-05T16:04:30.620356272Z" level=info msg="CreateContainer within sandbox \"df98d203663727087bfde7b194f5d6963b14cec85e0913a6ab856b324958601b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\""
Nov 5 16:04:30.621136 containerd[1607]: time="2025-11-05T16:04:30.621007795Z" level=info msg="StartContainer for \"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\""
Nov 5 16:04:30.622542 containerd[1607]: time="2025-11-05T16:04:30.622503960Z" level=info msg="connecting to shim ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80" address="unix:///run/containerd/s/a39c408aca28923590194892a3fc7300ddd60ab39d837da586def0634ff48da4" protocol=ttrpc version=3
Nov 5 16:04:30.652185 systemd[1]: Started cri-containerd-ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80.scope - libcontainer container ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80.
Nov 5 16:04:30.705428 containerd[1607]: time="2025-11-05T16:04:30.705375853Z" level=info msg="StartContainer for \"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" returns successfully"
Nov 5 16:04:30.706954 kubelet[2792]: I1105 16:04:30.706874 2792 setters.go:618] "Node became not ready" node="172-238-168-232" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-05T16:04:30Z","lastTransitionTime":"2025-11-05T16:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 5 16:04:30.793692 containerd[1607]: time="2025-11-05T16:04:30.793555111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"c76cc40854aeb09752847ef7919f91de28d2bb533c28782e3161f312b40eec27\" pid:4849 exited_at:{seconds:1762358670 nanos:792826929}"
Nov 5 16:04:31.308037 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 5 16:04:31.593656 kubelet[2792]: E1105 16:04:31.593353 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:31.611945 kubelet[2792]: I1105 16:04:31.611758 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qvd8b" podStartSLOduration=5.611741724 podStartE2EDuration="5.611741724s" podCreationTimestamp="2025-11-05 16:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 16:04:31.61121955 +0000 UTC m=+265.920608422" watchObservedRunningTime="2025-11-05 16:04:31.611741724 +0000 UTC m=+265.921130596"
Nov 5 16:04:31.987406 containerd[1607]: time="2025-11-05T16:04:31.987313431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"8cfb5478237e526680113c2b9c4b30dd1187215fcf4f167c89459c21731b2167\" pid:4958 exit_status:1 exited_at:{seconds:1762358671 nanos:986356050}"
Nov 5 16:04:32.728902 kubelet[2792]: E1105 16:04:32.728666 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:34.244563 containerd[1607]: time="2025-11-05T16:04:34.244189692Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"a6dfcb1797eddb2827215dfc5f9ffae0269d1d20890342e2cce9ec8d4c8823dd\" pid:5324 exit_status:1 exited_at:{seconds:1762358674 nanos:243424520}"
Nov 5 16:04:34.428208 systemd-networkd[1516]: lxc_health: Link UP
Nov 5 16:04:34.448689 systemd-networkd[1516]: lxc_health: Gained carrier
Nov 5 16:04:34.730709 kubelet[2792]: E1105 16:04:34.730661 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:35.605577 kubelet[2792]: E1105 16:04:35.604167 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:35.687663 systemd-networkd[1516]: lxc_health: Gained IPv6LL
Nov 5 16:04:36.488533 containerd[1607]: time="2025-11-05T16:04:36.488484071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"a112d3ad620fb1480ecaa0c2ee32122ab0f98c844c10833a23e9e22a8556e77a\" pid:5431 exited_at:{seconds:1762358676 nanos:486123464}"
Nov 5 16:04:36.606282 kubelet[2792]: E1105 16:04:36.606228 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
Nov 5 16:04:39.023951 containerd[1607]: time="2025-11-05T16:04:39.023885469Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"4b7d6937df55ccebc7941dbfed9c77cbdf65e1a029f3fff5746e8e4f7d3ddf9a\" pid:5462 exited_at:{seconds:1762358679 nanos:23073436}"
Nov 5 16:04:41.327451 containerd[1607]: time="2025-11-05T16:04:41.327399366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac73056075d8f2a63186d2c75bc93c07d8f55d87d1589e38fa74242e21a81c80\" id:\"600ea2db1794a0d909802bec8a13ef83f60f479825955898ac24479b4066dc90\" pid:5487 exited_at:{seconds:1762358681 nanos:325621262}"
Nov 5 16:04:41.471228 sshd[4683]: Connection closed by 139.178.89.65 port 44024
Nov 5 16:04:41.471866 sshd-session[4680]: pam_unix(sshd:session): session closed for user core
Nov 5 16:04:41.476845 systemd[1]: sshd@27-172.238.168.232:22-139.178.89.65:44024.service: Deactivated successfully.
Nov 5 16:04:41.480106 systemd[1]: session-28.scope: Deactivated successfully.
Nov 5 16:04:41.484450 systemd-logind[1592]: Session 28 logged out. Waiting for processes to exit.
Nov 5 16:04:41.486502 systemd-logind[1592]: Removed session 28.