Nov 5 00:19:22.329086 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 5 00:19:22.329111 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 5 00:19:22.329121 kernel: BIOS-provided physical RAM map:
Nov 5 00:19:22.329127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 5 00:19:22.329134 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 5 00:19:22.329142 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 00:19:22.329149 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 5 00:19:22.329155 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 5 00:19:22.329162 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 00:19:22.329168 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 00:19:22.329174 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 00:19:22.329181 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 00:19:22.329187 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 5 00:19:22.329195 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 00:19:22.329203 kernel: NX (Execute Disable) protection: active
Nov 5 00:19:22.329210 kernel: APIC: Static calls initialized
Nov 5 00:19:22.329216 kernel: SMBIOS 2.8 present.
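The BIOS-e820 map above is the firmware's statement of which physical address ranges are usable RAM. Summing the usable ranges is a quick sanity check against the memory totals the kernel prints later in the boot. A minimal sketch, using the three usable ranges copied from the log above (e820 end addresses are inclusive):

```python
# Usable ranges from the BIOS-e820 map above; end addresses are inclusive.
usable = [
    (0x0000000000000000, 0x000000000009f7ff),
    (0x0000000000100000, 0x000000007ffdcfff),
    (0x0000000100000000, 0x000000017fffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total_bytes} bytes ({total_bytes / 2**30:.2f} GiB)")
# -> usable RAM: 4294428672 bytes (4.00 GiB)
```

That figure lines up with the roughly 4 GiB instance described later in the log (`Memory: 3984332K/4193772K available ...`); the small shortfall there is memory the kernel trims or reserves before reporting.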
Nov 5 00:19:22.329223 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 5 00:19:22.329232 kernel: DMI: Memory slots populated: 1/1
Nov 5 00:19:22.329239 kernel: Hypervisor detected: KVM
Nov 5 00:19:22.329245 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 00:19:22.329252 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 00:19:22.329259 kernel: kvm-clock: using sched offset of 6178061222 cycles
Nov 5 00:19:22.329266 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 00:19:22.329273 kernel: tsc: Detected 1999.999 MHz processor
Nov 5 00:19:22.329281 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 00:19:22.329288 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 00:19:22.329297 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 5 00:19:22.329305 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 00:19:22.329312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 00:19:22.329319 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 5 00:19:22.329325 kernel: Using GB pages for direct mapping
Nov 5 00:19:22.329332 kernel: ACPI: Early table checksum verification disabled
Nov 5 00:19:22.329339 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 5 00:19:22.329348 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329356 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329363 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329370 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 00:19:22.329377 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329384 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329396 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329403 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 00:19:22.329411 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 5 00:19:22.329418 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 5 00:19:22.329425 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 00:19:22.329435 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 5 00:19:22.329442 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 5 00:19:22.329449 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 5 00:19:22.329457 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 5 00:19:22.329464 kernel: No NUMA configuration found
Nov 5 00:19:22.329471 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 5 00:19:22.329478 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Nov 5 00:19:22.329488 kernel: Zone ranges:
Nov 5 00:19:22.329495 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 00:19:22.329502 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 5 00:19:22.329509 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 00:19:22.329517 kernel: Device empty
Nov 5 00:19:22.329524 kernel: Movable zone start for each node
Nov 5 00:19:22.329531 kernel: Early memory node ranges
Nov 5 00:19:22.329538 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 00:19:22.329547 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 5 00:19:22.329554 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 5 00:19:22.329562 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 5 00:19:22.329569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 00:19:22.329576 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 00:19:22.329583 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 5 00:19:22.329591 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 00:19:22.329598 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 00:19:22.329607 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 00:19:22.329614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 00:19:22.329622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 00:19:22.329629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 00:19:22.329636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 00:19:22.329643 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 00:19:22.329650 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 00:19:22.330697 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 00:19:22.330706 kernel: TSC deadline timer available
Nov 5 00:19:22.330714 kernel: CPU topo: Max. logical packages: 1
Nov 5 00:19:22.330843 kernel: CPU topo: Max. logical dies: 1
Nov 5 00:19:22.330852 kernel: CPU topo: Max. dies per package: 1
Nov 5 00:19:22.330877 kernel: CPU topo: Max. threads per core: 1
Nov 5 00:19:22.330890 kernel: CPU topo: Num. cores per package: 2
Nov 5 00:19:22.330902 kernel: CPU topo: Num. threads per package: 2
Nov 5 00:19:22.330909 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 00:19:22.330917 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 00:19:22.330924 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 00:19:22.330931 kernel: kvm-guest: setup PV sched yield
Nov 5 00:19:22.330939 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 00:19:22.330946 kernel: Booting paravirtualized kernel on KVM
Nov 5 00:19:22.330953 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 00:19:22.330963 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 00:19:22.330971 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 00:19:22.330978 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 00:19:22.330985 kernel: pcpu-alloc: [0] 0 1
Nov 5 00:19:22.330992 kernel: kvm-guest: PV spinlocks enabled
Nov 5 00:19:22.331000 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 00:19:22.331008 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 5 00:19:22.331018 kernel: random: crng init done
Nov 5 00:19:22.331025 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 00:19:22.331033 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 00:19:22.331040 kernel: Fallback order for Node 0: 0
Nov 5 00:19:22.331047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Nov 5 00:19:22.331055 kernel: Policy zone: Normal
Nov 5 00:19:22.331064 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 00:19:22.331071 kernel: software IO TLB: area num 2.
Nov 5 00:19:22.331078 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 00:19:22.331085 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 00:19:22.331093 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 00:19:22.331100 kernel: Dynamic Preempt: voluntary
Nov 5 00:19:22.331107 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 00:19:22.331115 kernel: rcu: RCU event tracing is enabled.
Nov 5 00:19:22.331125 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 00:19:22.331132 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 00:19:22.331139 kernel: Rude variant of Tasks RCU enabled.
Nov 5 00:19:22.331147 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 00:19:22.331154 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 00:19:22.331161 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 00:19:22.331168 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:19:22.331753 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:19:22.331764 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 00:19:22.331775 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 00:19:22.331782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
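The `Kernel command line:` entry above carries several Flatcar-specific parameters (`mount.usr`, `verity.usrhash`, `flatcar.oem.id`, and so on), with GRUB prepending `rootflags=rw mount.usrflags=ro` so some keys appear twice. When scripting against boot logs or `/proc/cmdline`, it is convenient to split such a string into a dict; a minimal sketch (the sample string is shortened from the log above, and this simple split does not handle quoted values):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to ''."""
    params = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        params[key] = value  # a later duplicate overwrites an earlier one
    return params

sample = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
          "mount.usr=/dev/mapper/usr rootflags=rw root=LABEL=ROOT "
          "console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=akamai")
params = parse_cmdline(sample)
print(params["flatcar.oem.id"])  # -> akamai
print(params["root"])            # -> LABEL=ROOT
```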
Nov 5 00:19:22.331790 kernel: Console: colour VGA+ 80x25
Nov 5 00:19:22.331798 kernel: printk: legacy console [tty0] enabled
Nov 5 00:19:22.331805 kernel: printk: legacy console [ttyS0] enabled
Nov 5 00:19:22.331813 kernel: ACPI: Core revision 20240827
Nov 5 00:19:22.331823 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 00:19:22.331830 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 00:19:22.331838 kernel: x2apic enabled
Nov 5 00:19:22.331846 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 00:19:22.331853 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 00:19:22.331863 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 00:19:22.331870 kernel: kvm-guest: setup PV IPIs
Nov 5 00:19:22.331878 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 00:19:22.331886 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 5 00:19:22.331893 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Nov 5 00:19:22.331901 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 00:19:22.331908 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 00:19:22.331918 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 00:19:22.331925 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 00:19:22.331933 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 00:19:22.331941 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 00:19:22.331948 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 00:19:22.331956 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 00:19:22.331963 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 00:19:22.331973 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 00:19:22.331981 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
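The `Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)` entry ties its two numbers together: BogoMIPS is derived from loops-per-jiffy as lpj × HZ / 500000, and the kernel truncates the printed value to two decimals. A quick check (HZ=1000 is an assumption about this kernel's tick rate, chosen because it makes the printed figures agree):

```python
HZ = 1000      # assumed CONFIG_HZ for this kernel build
lpj = 1999999  # loops-per-jiffy from the log line above

bogomips = lpj * HZ / 500000          # 3999.998
printed = int(bogomips * 100) / 100   # kernel truncates, not rounds
print(printed)      # -> 3999.99, per CPU
print(2 * printed)  # -> 7999.98-ish; the log's later "7999.99" total is
                    #    truncated from 2 * 3999.998 before printing
```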
Nov 5 00:19:22.331989 kernel: active return thunk: srso_alias_return_thunk
Nov 5 00:19:22.331997 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 00:19:22.332004 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 5 00:19:22.332012 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 00:19:22.332019 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 00:19:22.332029 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 00:19:22.332036 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 00:19:22.332044 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 5 00:19:22.332052 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 00:19:22.332059 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 5 00:19:22.332067 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 5 00:19:22.332074 kernel: Freeing SMP alternatives memory: 32K
Nov 5 00:19:22.332084 kernel: pid_max: default: 32768 minimum: 301
Nov 5 00:19:22.332091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 00:19:22.332099 kernel: landlock: Up and running.
Nov 5 00:19:22.332106 kernel: SELinux: Initializing.
Nov 5 00:19:22.332114 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 00:19:22.332312 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 00:19:22.332320 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 5 00:19:22.332329 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 00:19:22.332337 kernel: ... version: 0
Nov 5 00:19:22.332344 kernel: ... bit width: 48
Nov 5 00:19:22.332352 kernel: ... generic registers: 6
Nov 5 00:19:22.332359 kernel: ... value mask: 0000ffffffffffff
Nov 5 00:19:22.332367 kernel: ... max period: 00007fffffffffff
Nov 5 00:19:22.332374 kernel: ... fixed-purpose events: 0
Nov 5 00:19:22.332383 kernel: ... event mask: 000000000000003f
Nov 5 00:19:22.332391 kernel: signal: max sigframe size: 3376
Nov 5 00:19:22.332398 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 00:19:22.332406 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 00:19:22.332414 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 00:19:22.332421 kernel: smp: Bringing up secondary CPUs ...
Nov 5 00:19:22.332429 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 00:19:22.332436 kernel: .... node #0, CPUs: #1
Nov 5 00:19:22.332446 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 00:19:22.332453 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Nov 5 00:19:22.332461 kernel: Memory: 3984332K/4193772K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 204764K reserved, 0K cma-reserved)
Nov 5 00:19:22.332468 kernel: devtmpfs: initialized
Nov 5 00:19:22.332476 kernel: x86/mm: Memory block size: 128MB
Nov 5 00:19:22.332484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 00:19:22.332491 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 00:19:22.332501 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 00:19:22.332508 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 00:19:22.332516 kernel: audit: initializing netlink subsys (disabled)
Nov 5 00:19:22.332524 kernel: audit: type=2000 audit(1762301959.619:1): state=initialized audit_enabled=0 res=1
Nov 5 00:19:22.332531 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 00:19:22.332538 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 00:19:22.332546 kernel: cpuidle: using governor menu
Nov 5 00:19:22.332555 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 00:19:22.332563 kernel: dca service started, version 1.12.1
Nov 5 00:19:22.332571 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 00:19:22.332578 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 00:19:22.332586 kernel: PCI: Using configuration type 1 for base access
Nov 5 00:19:22.332593 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 00:19:22.332601 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 00:19:22.332610 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 00:19:22.332618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 00:19:22.332625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 00:19:22.332633 kernel: ACPI: Added _OSI(Module Device)
Nov 5 00:19:22.332640 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 00:19:22.332648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 00:19:22.332689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 00:19:22.332704 kernel: ACPI: Interpreter enabled
Nov 5 00:19:22.332712 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 00:19:22.332720 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 00:19:22.332728 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 00:19:22.332736 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 00:19:22.332743 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 00:19:22.332751 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 00:19:22.334817 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 00:19:22.335018 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 00:19:22.335464 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 00:19:22.335480 kernel: PCI host bridge to bus 0000:00
Nov 5 00:19:22.335749 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 00:19:22.335938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 00:19:22.336155 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 00:19:22.336517 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 5 00:19:22.336727 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 00:19:22.336904 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 5 00:19:22.337068 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 00:19:22.337444 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 00:19:22.337631 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 00:19:22.337847 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 00:19:22.338030 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 00:19:22.338370 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 00:19:22.338545 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 00:19:22.338771 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 00:19:22.339098 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Nov 5 00:19:22.339607 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 00:19:22.340026 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 00:19:22.340307 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 00:19:22.340540 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Nov 5 00:19:22.340890 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 00:19:22.341078 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 00:19:22.341255 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 00:19:22.341633 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 00:19:22.341903 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 00:19:22.342103 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 00:19:22.342447 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Nov 5 00:19:22.342623 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Nov 5 00:19:22.342872 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 00:19:22.343055 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 00:19:22.343071 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 00:19:22.343079 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 00:19:22.343087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 00:19:22.343094 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 00:19:22.343102 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 00:19:22.343110 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 00:19:22.343117 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 00:19:22.343275 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 00:19:22.343282 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 00:19:22.343290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 00:19:22.343297 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 00:19:22.343305 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 00:19:22.343312 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 00:19:22.343320 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 00:19:22.343330 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 00:19:22.343337 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 00:19:22.343345 kernel: iommu: Default domain type: Translated
Nov 5 00:19:22.343352 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 00:19:22.343360 kernel: PCI: Using ACPI for IRQ routing
Nov 5 00:19:22.343367 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 00:19:22.343375 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 5 00:19:22.343385 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 5 00:19:22.343561 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 00:19:22.343770 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 00:19:22.343954 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 00:19:22.343964 kernel: vgaarb: loaded
Nov 5 00:19:22.343973 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 00:19:22.343980 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 00:19:22.343992 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 00:19:22.343999 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 00:19:22.344007 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 00:19:22.344015 kernel: pnp: PnP ACPI init
Nov 5 00:19:22.344202 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 00:19:22.344214 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 00:19:22.344222 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 00:19:22.344233 kernel: NET: Registered PF_INET protocol family
Nov 5 00:19:22.344438 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 00:19:22.344447 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 00:19:22.344454 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 00:19:22.344462 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 00:19:22.344470 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 00:19:22.344479 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 00:19:22.344487 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 00:19:22.344495 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 00:19:22.344502 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 00:19:22.344510 kernel: NET: Registered PF_XDP protocol family
Nov 5 00:19:22.344700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 00:19:22.344876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 00:19:22.345044 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 00:19:22.345330 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 5 00:19:22.345489 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 00:19:22.345748 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 5 00:19:22.345763 kernel: PCI: CLS 0 bytes, default 64
Nov 5 00:19:22.345771 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 5 00:19:22.345779 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 5 00:19:22.345791 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Nov 5 00:19:22.345799 kernel: Initialise system trusted keyrings
Nov 5 00:19:22.345806 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 00:19:22.345814 kernel: Key type asymmetric registered
Nov 5 00:19:22.345821 kernel: Asymmetric key parser 'x509' registered
Nov 5 00:19:22.345829 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 00:19:22.345836 kernel: io scheduler mq-deadline registered
Nov 5 00:19:22.345846 kernel: io scheduler kyber registered
Nov 5 00:19:22.345854 kernel: io scheduler bfq registered
Nov 5 00:19:22.345861 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 00:19:22.345869 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 00:19:22.345877 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 00:19:22.345885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 00:19:22.345892 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 00:19:22.345902 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 00:19:22.345910 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 00:19:22.345917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 00:19:22.345925 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 00:19:22.346110 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 00:19:22.346279 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 00:19:22.346445 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T00:19:20 UTC (1762301960)
Nov 5 00:19:22.346617 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 00:19:22.346627 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 00:19:22.346635 kernel: NET: Registered PF_INET6 protocol family
Nov 5 00:19:22.346643 kernel: Segment Routing with IPv6
Nov 5 00:19:22.346651 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 00:19:22.346683 kernel: NET: Registered PF_PACKET protocol family
Nov 5 00:19:22.346692 kernel: Key type dns_resolver registered
Nov 5 00:19:22.346704 kernel: IPI shorthand broadcast: enabled
Nov 5 00:19:22.346712 kernel: sched_clock: Marking stable (1321005460, 381596267)->(1845442982, -142841255)
Nov 5 00:19:22.346720 kernel: registered taskstats version 1
Nov 5 00:19:22.346728 kernel: Loading compiled-in X.509 certificates
Nov 5 00:19:22.346736 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44'
Nov 5 00:19:22.346743 kernel: Demotion targets for Node 0: null
Nov 5 00:19:22.346751 kernel: Key type .fscrypt registered
Nov 5 00:19:22.346761 kernel: Key type fscrypt-provisioning registered
Nov 5 00:19:22.346769 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 00:19:22.346777 kernel: ima: Allocated hash algorithm: sha1
Nov 5 00:19:22.346784 kernel: ima: No architecture policies found
Nov 5 00:19:22.346792 kernel: clk: Disabling unused clocks
Nov 5 00:19:22.346800 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 5 00:19:22.346808 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 00:19:22.346817 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 00:19:22.346825 kernel: Run /init as init process
Nov 5 00:19:22.346833 kernel: with arguments:
Nov 5 00:19:22.346841 kernel: /init
Nov 5 00:19:22.346849 kernel: with environment:
Nov 5 00:19:22.346857 kernel: HOME=/
Nov 5 00:19:22.346878 kernel: TERM=linux
Nov 5 00:19:22.346890 kernel: SCSI subsystem initialized
Nov 5 00:19:22.346898 kernel: libata version 3.00 loaded.
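The `rtc_cmos 00:03: setting system clock to 2025-11-05T00:19:20 UTC (1762301960)` entry above prints the same instant in two forms: the parenthesised number is the Unix epoch time in seconds. A quick conversion confirms they match:

```python
from datetime import datetime, timezone

# Epoch seconds taken from the rtc_cmos log entry above.
stamp = datetime.fromtimestamp(1762301960, tz=timezone.utc)
print(stamp.isoformat())  # -> 2025-11-05T00:19:20+00:00
```

The same epoch value (minus a couple of seconds of boot delay) also appears in the `audit(1762301959.619:1)` entry earlier in the log, which is how these wall-clock stamps and the journal's own timestamps can be cross-checked.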
Nov 5 00:19:22.347093 kernel: ahci 0000:00:1f.2: version 3.0 Nov 5 00:19:22.347105 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 5 00:19:22.347413 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 5 00:19:22.347589 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 5 00:19:22.347829 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 5 00:19:22.348031 kernel: scsi host0: ahci Nov 5 00:19:22.348221 kernel: scsi host1: ahci Nov 5 00:19:22.348574 kernel: scsi host2: ahci Nov 5 00:19:22.348805 kernel: scsi host3: ahci Nov 5 00:19:22.349003 kernel: scsi host4: ahci Nov 5 00:19:22.349390 kernel: scsi host5: ahci Nov 5 00:19:22.349403 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1 Nov 5 00:19:22.349411 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1 Nov 5 00:19:22.349419 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1 Nov 5 00:19:22.349427 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1 Nov 5 00:19:22.349435 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1 Nov 5 00:19:22.349446 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1 Nov 5 00:19:22.349454 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.349462 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.349469 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.349477 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.349485 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.349493 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 5 00:19:22.350519 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Nov 5 00:19:22.350774 kernel: scsi host6: Virtio SCSI HBA Nov 5 00:19:22.350990 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU 
HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 5 00:19:22.351191 kernel: sd 6:0:0:0: Power-on or device reset occurred Nov 5 00:19:22.351546 kernel: sd 6:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Nov 5 00:19:22.351821 kernel: sd 6:0:0:0: [sda] Write Protect is off Nov 5 00:19:22.352136 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 5 00:19:22.352693 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 5 00:19:22.352716 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 00:19:22.352727 kernel: GPT:25804799 != 167739391 Nov 5 00:19:22.352735 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 00:19:22.352744 kernel: GPT:25804799 != 167739391 Nov 5 00:19:22.352756 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 00:19:22.352764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 5 00:19:22.353305 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Nov 5 00:19:22.353323 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 00:19:22.353331 kernel: device-mapper: uevent: version 1.0.3 Nov 5 00:19:22.353340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 00:19:22.353352 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 5 00:19:22.353360 kernel: raid6: avx2x4 gen() 23699 MB/s Nov 5 00:19:22.353371 kernel: raid6: avx2x2 gen() 23635 MB/s Nov 5 00:19:22.353378 kernel: raid6: avx2x1 gen() 15038 MB/s Nov 5 00:19:22.353386 kernel: raid6: using algorithm avx2x4 gen() 23699 MB/s Nov 5 00:19:22.353397 kernel: raid6: .... 
xor() 3242 MB/s, rmw enabled Nov 5 00:19:22.353405 kernel: raid6: using avx2x2 recovery algorithm Nov 5 00:19:22.353413 kernel: xor: automatically using best checksumming function avx Nov 5 00:19:22.353421 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 00:19:22.353429 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (167) Nov 5 00:19:22.353437 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 5 00:19:22.353446 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:19:22.353456 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 00:19:22.353464 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 00:19:22.353472 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 00:19:22.353480 kernel: loop: module loaded Nov 5 00:19:22.353488 kernel: loop0: detected capacity change from 0 to 100120 Nov 5 00:19:22.353496 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 00:19:22.353505 systemd[1]: Successfully made /usr/ read-only. Nov 5 00:19:22.353518 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 00:19:22.353527 systemd[1]: Detected virtualization kvm. Nov 5 00:19:22.355761 systemd[1]: Detected architecture x86-64. Nov 5 00:19:22.355772 systemd[1]: Running in initrd. Nov 5 00:19:22.355781 systemd[1]: No hostname configured, using default hostname. Nov 5 00:19:22.355790 systemd[1]: Hostname set to <localhost>. Nov 5 00:19:22.355802 systemd[1]: Initializing machine ID from random generator. 
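The GPT complaints earlier in the log ("Primary header thinks Alt. header is not at the end of the disk", 25804799 != 167739391) are the classic signature of a disk image that was grown after partitioning: the backup GPT header still sits at the old end of the disk. A minimal repair sketch, assuming the disk is /dev/sda as in this log (sgdisk is shown alongside the GNU Parted route the kernel suggests; both require root):

```shell
# Relocate the backup GPT header and partition entries to the true end of
# the disk. sgdisk -e does this non-interactively; interactive GNU Parted
# prints the same mismatch and offers to "Fix" it.
sgdisk -e /dev/sda
# Ask the kernel to re-read the corrected partition table.
partprobe /dev/sda
```

On Flatcar this is normally unnecessary by hand: a later boot stage rewrites the table itself, which is exactly what the disk-uuid messages further down record.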
Nov 5 00:19:22.355811 systemd[1]: Queued start job for default target initrd.target. Nov 5 00:19:22.355819 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 00:19:22.355828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 00:19:22.355837 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 00:19:22.355846 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 00:19:22.355856 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 00:19:22.355865 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 00:19:22.355874 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 00:19:22.355882 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 00:19:22.355890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 00:19:22.355899 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 00:19:22.355909 systemd[1]: Reached target paths.target - Path Units. Nov 5 00:19:22.355917 systemd[1]: Reached target slices.target - Slice Units. Nov 5 00:19:22.355925 systemd[1]: Reached target swap.target - Swaps. Nov 5 00:19:22.355934 systemd[1]: Reached target timers.target - Timer Units. Nov 5 00:19:22.355942 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 00:19:22.355950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 00:19:22.355958 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 00:19:22.355969 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 5 00:19:22.355977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 00:19:22.355985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 00:19:22.355993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 00:19:22.356002 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 00:19:22.356010 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 00:19:22.356318 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 00:19:22.356336 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 00:19:22.356345 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 00:19:22.356354 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 00:19:22.356362 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 00:19:22.356371 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 00:19:22.356383 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 00:19:22.356393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:19:22.356402 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 00:19:22.356411 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 00:19:22.356419 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 00:19:22.356430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 00:19:22.356464 systemd-journald[303]: Collecting audit messages is disabled. Nov 5 00:19:22.356486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 5 00:19:22.356497 kernel: Bridge firewalling registered Nov 5 00:19:22.356505 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 00:19:22.356514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:19:22.356522 systemd-journald[303]: Journal started Nov 5 00:19:22.356539 systemd-journald[303]: Runtime Journal (/run/log/journal/712f7c7190444045a774a271bcae69d3) is 8M, max 78.2M, 70.2M free. Nov 5 00:19:22.336036 systemd-modules-load[304]: Inserted module 'br_netfilter' Nov 5 00:19:22.367776 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 00:19:22.376886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 00:19:22.382774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 00:19:22.389153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 00:19:22.397463 systemd-tmpfiles[320]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 00:19:22.481870 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:19:22.484726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:19:22.487041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 00:19:22.491402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 00:19:22.495752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 00:19:22.504255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 00:19:22.520833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
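The bridge warning above is the kernel noting that bridged frames only traverse arp/ip/ip6tables once the br_netfilter module is loaded; here systemd-modules-load inserted it a moment later. To make that deterministic rather than dependent on a script, a drop-in in the stock systemd-modules-load location would look like this (the path is the standard convention, not something this log shows):

```
# /etc/modules-load.d/br_netfilter.conf  (hypothetical drop-in)
br_netfilter
```

With the module loaded, the related sysctls (net.bridge.bridge-nf-call-iptables and friends) become available for bridge firewalling.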
Nov 5 00:19:22.525792 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 00:19:22.557146 systemd-resolved[330]: Positive Trust Anchors: Nov 5 00:19:22.557157 systemd-resolved[330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 00:19:22.557161 systemd-resolved[330]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 00:19:22.562199 dracut-cmdline[343]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 5 00:19:22.557188 systemd-resolved[330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 00:19:22.590769 systemd-resolved[330]: Defaulting to hostname 'linux'. Nov 5 00:19:22.592875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 00:19:22.594932 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 00:19:22.661760 kernel: Loading iSCSI transport class v2.0-870. 
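The dracut-cmdline line above shows the effective kernel command line, with dracut's own additions (rd.driver.pre=btrfs, SYSTEMD_SULOGIN_FORCE=1) prepended ahead of the boot loader's arguments, which is why rootflags=rw and mount.usrflags=ro appear twice. A small sketch of tokenizing such a line and taking the last occurrence of a key, the way most consumers resolve duplicates (the values are trimmed from this log's cmdline):

```shell
# Whitespace-tokenize a kernel command line and pick the last value for a key.
cmdline='rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro rootflags=rw root=LABEL=ROOT flatcar.oem.id=akamai'

get_param() {
  key=$1
  # One token per line, keep only "key=..." matches, last one wins.
  for tok in $cmdline; do echo "$tok"; done | grep "^${key}=" | tail -n 1 | cut -d= -f2-
}

get_param flatcar.oem.id   # prints: akamai
get_param root             # prints: LABEL=ROOT
```

Note the anchored `grep "^root="` so that `rootflags=` does not shadow `root=`, and `cut -f2-` so values containing `=` (like LABEL=ROOT) survive intact.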
Nov 5 00:19:22.677718 kernel: iscsi: registered transport (tcp) Nov 5 00:19:22.700711 kernel: iscsi: registered transport (qla4xxx) Nov 5 00:19:22.700741 kernel: QLogic iSCSI HBA Driver Nov 5 00:19:22.726804 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 00:19:22.745857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 00:19:22.751053 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 00:19:22.796330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 00:19:22.798633 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 00:19:22.802833 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 00:19:22.830988 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 00:19:22.835806 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 00:19:22.866879 systemd-udevd[578]: Using default interface naming scheme 'v257'. Nov 5 00:19:22.880490 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 00:19:22.885868 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 00:19:22.918720 dracut-pre-trigger[647]: rd.md=0: removing MD RAID activation Nov 5 00:19:22.924917 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 00:19:22.946970 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 00:19:22.953067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 00:19:22.958383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 5 00:19:22.998569 systemd-networkd[708]: lo: Link UP Nov 5 00:19:22.998576 systemd-networkd[708]: lo: Gained carrier Nov 5 00:19:23.001140 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 00:19:23.002058 systemd[1]: Reached target network.target - Network. Nov 5 00:19:23.084530 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 00:19:23.088505 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 00:19:23.201461 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 5 00:19:23.212482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 5 00:19:23.223450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 5 00:19:23.236123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 5 00:19:23.239814 kernel: cryptd: max_cpu_qlen set to 1000 Nov 5 00:19:23.243795 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 00:19:23.266767 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 5 00:19:23.272859 disk-uuid[755]: Primary Header is updated. Nov 5 00:19:23.272859 disk-uuid[755]: Secondary Entries is updated. Nov 5 00:19:23.272859 disk-uuid[755]: Secondary Header is updated. Nov 5 00:19:23.436697 kernel: AES CTR mode by8 optimization enabled Nov 5 00:19:23.468722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 00:19:23.468853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:19:23.470885 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 00:19:23.485106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 00:19:23.559866 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:19:23.561056 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 00:19:23.562434 systemd-networkd[708]: eth0: Link UP Nov 5 00:19:23.562703 systemd-networkd[708]: eth0: Gained carrier Nov 5 00:19:23.562715 systemd-networkd[708]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 00:19:23.609574 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 00:19:23.665722 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 00:19:23.668984 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 00:19:23.671302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 00:19:23.673342 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 00:19:23.677146 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 00:19:23.699523 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 00:19:24.289745 systemd-networkd[708]: eth0: DHCPv4 address 172.234.219.54/24, gateway 172.234.219.1 acquired from 23.33.176.76 Nov 5 00:19:24.480981 disk-uuid[756]: Warning: The kernel is still using the old partition table. Nov 5 00:19:24.480981 disk-uuid[756]: The new table will be used at the next reboot or after you Nov 5 00:19:24.480981 disk-uuid[756]: run partprobe(8) or kpartx(8) Nov 5 00:19:24.480981 disk-uuid[756]: The operation has completed successfully. Nov 5 00:19:24.492092 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 00:19:24.492257 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
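disk-uuid's warning above means the on-disk GPT was rewritten while the kernel still holds the old in-memory copy; the new table takes effect at the next reboot or after a manual re-read. A sketch of the re-read the message itself recommends (device name taken from this log; all of these need root):

```shell
# Re-read the whole partition table for the disk the log names.
partprobe /dev/sda
# Or, per the warning's alternative, refresh device-mapper partition mappings.
kpartx -u /dev/sda
# Verify the kernel's view of the partitions afterwards.
lsblk /dev/sda
```

In this boot nothing needs doing by hand: the initrd continues with the old table and the refreshed GPT is picked up on the next transition.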
Nov 5 00:19:24.494251 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 00:19:24.551686 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848) Nov 5 00:19:24.557477 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:19:24.557509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:19:24.568703 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:19:24.568733 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:19:24.568749 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:19:24.581720 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:19:24.582026 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 00:19:24.585064 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 00:19:24.727148 ignition[867]: Ignition 2.22.0 Nov 5 00:19:24.727171 ignition[867]: Stage: fetch-offline Nov 5 00:19:24.727222 ignition[867]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:24.727236 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:24.729925 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 00:19:24.727533 ignition[867]: parsed url from cmdline: "" Nov 5 00:19:24.733848 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 00:19:24.727537 ignition[867]: no config URL provided Nov 5 00:19:24.727543 ignition[867]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 00:19:24.727554 ignition[867]: no config at "/usr/lib/ignition/user.ign" Nov 5 00:19:24.727559 ignition[867]: failed to fetch config: resource requires networking Nov 5 00:19:24.727962 ignition[867]: Ignition finished successfully Nov 5 00:19:24.766235 ignition[873]: Ignition 2.22.0 Nov 5 00:19:24.766256 ignition[873]: Stage: fetch Nov 5 00:19:24.766373 ignition[873]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:24.766383 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:24.766461 ignition[873]: parsed url from cmdline: "" Nov 5 00:19:24.766465 ignition[873]: no config URL provided Nov 5 00:19:24.766470 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 00:19:24.766478 ignition[873]: no config at "/usr/lib/ignition/user.ign" Nov 5 00:19:24.766500 ignition[873]: PUT http://169.254.169.254/v1/token: attempt #1 Nov 5 00:19:24.860580 ignition[873]: PUT result: OK Nov 5 00:19:24.861696 ignition[873]: GET http://169.254.169.254/v1/user-data: attempt #1 Nov 5 00:19:24.913835 systemd-networkd[708]: eth0: Gained IPv6LL Nov 5 00:19:24.974798 ignition[873]: GET result: OK Nov 5 00:19:24.974905 ignition[873]: parsing config with SHA512: 2d79eccd479849db8342e7841ed8ac0231b96e07b17d2ebeb81f3b6bc6b5ba2010838394388def4eeb93a792b898c44b870d0d56a9d5054ac43804708b19ce0f Nov 5 00:19:24.978959 unknown[873]: fetched base config from "system" Nov 5 00:19:24.978976 unknown[873]: fetched base config from "system" Nov 5 00:19:24.979399 ignition[873]: fetch: fetch complete Nov 5 00:19:24.978983 unknown[873]: fetched user config from "akamai" Nov 5 00:19:24.979404 ignition[873]: fetch: fetch passed Nov 5 00:19:24.982220 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
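The PUT-then-GET pair above is Ignition driving the Akamai/Linode metadata service's token flow: first obtain a short-lived token, then fetch user-data with it. By hand it looks roughly like this (the link-local endpoint and both paths are straight from the log; the two header names follow the Linode Metadata API and are an assumption here, not shown in the log):

```shell
# Request a metadata token, then use it to fetch the instance user-data.
token=$(curl -sf -X PUT \
  -H "Metadata-Token-Expiry-Seconds: 3600" \
  http://169.254.169.254/v1/token)
curl -sf -H "Metadata-Token: ${token}" \
  http://169.254.169.254/v1/user-data
```

The earlier fetch-offline stage failed with "resource requires networking" precisely because this endpoint is only reachable once eth0 is configured, which DHCP completed just before the fetch stage ran.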
Nov 5 00:19:24.979447 ignition[873]: Ignition finished successfully Nov 5 00:19:24.986821 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 00:19:25.017331 ignition[879]: Ignition 2.22.0 Nov 5 00:19:25.017351 ignition[879]: Stage: kargs Nov 5 00:19:25.017477 ignition[879]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:25.017487 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:25.021736 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 00:19:25.018080 ignition[879]: kargs: kargs passed Nov 5 00:19:25.018120 ignition[879]: Ignition finished successfully Nov 5 00:19:25.026810 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 00:19:25.058143 ignition[886]: Ignition 2.22.0 Nov 5 00:19:25.058166 ignition[886]: Stage: disks Nov 5 00:19:25.058318 ignition[886]: no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:25.058329 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:25.059130 ignition[886]: disks: disks passed Nov 5 00:19:25.061781 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 00:19:25.059174 ignition[886]: Ignition finished successfully Nov 5 00:19:25.064220 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 00:19:25.086987 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 00:19:25.089012 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 00:19:25.091094 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 00:19:25.093360 systemd[1]: Reached target basic.target - Basic System. Nov 5 00:19:25.097791 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 5 00:19:25.137905 systemd-fsck[894]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 00:19:25.141115 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 00:19:25.145939 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 00:19:25.276695 kernel: EXT4-fs (sda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none. Nov 5 00:19:25.277594 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 00:19:25.279123 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 00:19:25.281813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 00:19:25.285763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 00:19:25.287774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 00:19:25.289494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 00:19:25.290769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 00:19:25.297959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 00:19:25.301842 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 5 00:19:25.311698 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (902) Nov 5 00:19:25.316777 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:19:25.316810 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:19:25.330836 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:19:25.330864 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:19:25.330877 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:19:25.334313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 00:19:25.386814 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 00:19:25.392650 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Nov 5 00:19:25.399554 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 00:19:25.404680 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 00:19:25.526795 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 00:19:25.529598 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 00:19:25.533791 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 00:19:25.548378 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 00:19:25.553745 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:19:25.568152 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 5 00:19:25.586159 ignition[1016]: INFO : Ignition 2.22.0 Nov 5 00:19:25.586159 ignition[1016]: INFO : Stage: mount Nov 5 00:19:25.586159 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:25.586159 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:25.586159 ignition[1016]: INFO : mount: mount passed Nov 5 00:19:25.586159 ignition[1016]: INFO : Ignition finished successfully Nov 5 00:19:25.588081 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 00:19:25.591372 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 00:19:25.610393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 00:19:25.636701 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1026) Nov 5 00:19:25.641920 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 5 00:19:25.641955 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 5 00:19:25.654149 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 5 00:19:25.654274 kernel: BTRFS info (device sda6): turning on async discard Nov 5 00:19:25.654289 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 00:19:25.659234 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 00:19:25.701708 ignition[1042]: INFO : Ignition 2.22.0 Nov 5 00:19:25.701708 ignition[1042]: INFO : Stage: files Nov 5 00:19:25.704021 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 00:19:25.704021 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 5 00:19:25.704021 ignition[1042]: DEBUG : files: compiled without relabeling support, skipping Nov 5 00:19:25.704021 ignition[1042]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 00:19:25.704021 ignition[1042]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 00:19:25.710528 ignition[1042]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 00:19:25.710528 ignition[1042]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 00:19:25.710528 ignition[1042]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 00:19:25.708842 unknown[1042]: wrote ssh authorized keys file for user: core Nov 5 00:19:25.715759 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 00:19:25.715759 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 5 00:19:25.810487 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 00:19:25.860948 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 5 00:19:25.862931 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 00:19:25.862931 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 5 00:19:26.118351 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 00:19:26.241747 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 00:19:26.254373 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 00:19:26.493019 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 5 00:19:27.086913 ignition[1042]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 00:19:27.086913 ignition[1042]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 00:19:27.112982 ignition[1042]: INFO : files: files passed
Nov 5 00:19:27.112982 ignition[1042]: INFO : Ignition finished successfully
Nov 5 00:19:27.098085 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 00:19:27.116797 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 00:19:27.121770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 00:19:27.135559 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 00:19:27.135724 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 00:19:27.144888 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 00:19:27.144888 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 00:19:27.149132 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 00:19:27.151654 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 00:19:27.154184 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 00:19:27.155944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 00:19:27.201074 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 00:19:27.201212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 00:19:27.203542 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 00:19:27.205354 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 00:19:27.209000 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 00:19:27.210795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 00:19:27.234436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 00:19:27.237029 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 00:19:27.259089 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 00:19:27.259221 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 00:19:27.260558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 00:19:27.262878 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 00:19:27.264833 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 00:19:27.264975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 00:19:27.267646 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 00:19:27.269033 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 00:19:27.270943 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 00:19:27.272878 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 00:19:27.274783 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 00:19:27.276626 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 00:19:27.278995 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 00:19:27.281069 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 00:19:27.283394 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 00:19:27.285635 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 00:19:27.287595 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 00:19:27.289498 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 00:19:27.289636 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 00:19:27.292048 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 00:19:27.293289 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 00:19:27.295015 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 00:19:27.295853 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 00:19:27.297931 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 00:19:27.298066 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 00:19:27.300674 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 00:19:27.300792 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 00:19:27.301983 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 00:19:27.302119 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 00:19:27.305748 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 00:19:27.307601 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 00:19:27.307755 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 00:19:27.311744 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 00:19:27.312986 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 00:19:27.313103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 00:19:27.316423 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 00:19:27.316523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 00:19:27.317495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 00:19:27.317593 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 00:19:27.327100 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 00:19:27.327238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 00:19:27.349269 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 00:19:27.353216 ignition[1099]: INFO : Ignition 2.22.0
Nov 5 00:19:27.353216 ignition[1099]: INFO : Stage: umount
Nov 5 00:19:27.353216 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 00:19:27.353216 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 5 00:19:27.353216 ignition[1099]: INFO : umount: umount passed
Nov 5 00:19:27.353216 ignition[1099]: INFO : Ignition finished successfully
Nov 5 00:19:27.357990 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 00:19:27.358119 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 00:19:27.359812 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 00:19:27.359910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 00:19:27.383270 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 00:19:27.383357 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 00:19:27.384470 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 00:19:27.384520 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 00:19:27.386148 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 00:19:27.386200 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 00:19:27.388008 systemd[1]: Stopped target network.target - Network.
Nov 5 00:19:27.389635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 00:19:27.389721 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 00:19:27.391458 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 00:19:27.393473 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 00:19:27.393966 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 00:19:27.395514 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 00:19:27.397476 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 00:19:27.399410 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 00:19:27.399457 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 00:19:27.401173 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 00:19:27.401216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 00:19:27.403021 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 00:19:27.403072 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 00:19:27.404741 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 00:19:27.404789 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 00:19:27.406695 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 00:19:27.406754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 00:19:27.409057 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 00:19:27.411009 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 00:19:27.421602 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 00:19:27.421883 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 00:19:27.431930 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 00:19:27.432330 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 00:19:27.439412 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 00:19:27.441563 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 00:19:27.441618 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 00:19:27.445899 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 00:19:27.449386 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 00:19:27.449460 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 00:19:27.451245 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 00:19:27.451302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 00:19:27.454005 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 00:19:27.454056 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 00:19:27.456215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 00:19:27.479795 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 00:19:27.479974 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 00:19:27.483097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 00:19:27.483365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 00:19:27.485995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 00:19:27.486037 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 00:19:27.490906 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 00:19:27.490964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 00:19:27.492099 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 00:19:27.492153 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 00:19:27.493080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 00:19:27.493135 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 00:19:27.495801 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 00:19:27.498848 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 00:19:27.498909 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 00:19:27.500773 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 00:19:27.500827 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 00:19:27.502160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 00:19:27.502210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 00:19:27.507695 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 00:19:27.507828 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 00:19:27.513895 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 00:19:27.514037 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 00:19:27.515879 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 00:19:27.519845 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 00:19:27.537354 systemd[1]: Switching root.
Nov 5 00:19:27.575687 systemd-journald[303]: Journal stopped
Nov 5 00:19:28.839936 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Nov 5 00:19:28.839966 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 00:19:28.839980 kernel: SELinux: policy capability open_perms=1
Nov 5 00:19:28.839991 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 00:19:28.840003 kernel: SELinux: policy capability always_check_network=0
Nov 5 00:19:28.840012 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 00:19:28.840023 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 00:19:28.840032 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 00:19:28.840042 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 00:19:28.840051 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 00:19:28.840063 kernel: audit: type=1403 audit(1762301967.727:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 00:19:28.840074 systemd[1]: Successfully loaded SELinux policy in 80.701ms.
Nov 5 00:19:28.840085 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.734ms.
Nov 5 00:19:28.840097 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 00:19:28.840110 systemd[1]: Detected virtualization kvm.
Nov 5 00:19:28.840120 systemd[1]: Detected architecture x86-64.
Nov 5 00:19:28.840131 systemd[1]: Detected first boot.
Nov 5 00:19:28.840141 systemd[1]: Initializing machine ID from random generator.
Nov 5 00:19:28.840152 zram_generator::config[1143]: No configuration found.
Nov 5 00:19:28.840346 kernel: Guest personality initialized and is inactive
Nov 5 00:19:28.840356 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 00:19:28.840366 kernel: Initialized host personality
Nov 5 00:19:28.840376 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 00:19:28.840386 systemd[1]: Populated /etc with preset unit settings.
Nov 5 00:19:28.840397 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 00:19:28.840409 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 00:19:28.840420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 00:19:28.840431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 00:19:28.840442 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 00:19:28.840452 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 00:19:28.840463 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 00:19:28.840476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 00:19:28.840487 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 00:19:28.840498 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 00:19:28.840508 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 00:19:28.840519 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 00:19:28.840529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 00:19:28.840540 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 00:19:28.840552 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 00:19:28.840563 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 00:19:28.840577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 00:19:28.840587 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 00:19:28.840598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 00:19:28.840609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 00:19:28.840622 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 00:19:28.840634 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 00:19:28.840647 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 00:19:28.840681 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 00:19:28.840699 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 00:19:28.840711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 00:19:28.840726 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 00:19:28.840736 systemd[1]: Reached target swap.target - Swaps.
Nov 5 00:19:28.840834 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 00:19:28.840850 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 00:19:28.840861 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 00:19:28.840872 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 00:19:28.840887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 00:19:28.840898 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 00:19:28.840909 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 00:19:28.840919 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 00:19:28.840930 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 00:19:28.840943 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 00:19:28.840954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:19:28.840965 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 00:19:28.840976 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 00:19:28.840987 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 00:19:28.841000 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 00:19:28.841013 systemd[1]: Reached target machines.target - Containers.
Nov 5 00:19:28.841023 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 00:19:28.841034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 00:19:28.841558 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 00:19:28.841574 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 00:19:28.841585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 00:19:28.841596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 00:19:28.841611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 00:19:28.841622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 00:19:28.841633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 00:19:28.841644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 00:19:28.841674 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 00:19:28.841696 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 00:19:28.841708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 00:19:28.841723 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 00:19:28.841735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 00:19:28.841746 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 00:19:28.841757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 00:19:28.841768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 00:19:28.841779 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 00:19:28.841793 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 00:19:28.841804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 00:19:28.841815 kernel: fuse: init (API version 7.41)
Nov 5 00:19:28.841826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:19:28.841837 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 00:19:28.841848 kernel: ACPI: bus type drm_connector registered
Nov 5 00:19:28.841858 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 00:19:28.841871 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 00:19:28.841903 systemd-journald[1234]: Collecting audit messages is disabled.
Nov 5 00:19:28.841925 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 00:19:28.841938 systemd-journald[1234]: Journal started
Nov 5 00:19:28.841957 systemd-journald[1234]: Runtime Journal (/run/log/journal/45838be9d61c4f22b288586969beee4d) is 8M, max 78.2M, 70.2M free.
Nov 5 00:19:28.427634 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 00:19:28.443632 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 00:19:28.444491 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 00:19:28.846928 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 00:19:28.850010 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 00:19:28.852688 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 00:19:28.853836 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 00:19:28.855187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 00:19:28.858017 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 00:19:28.858239 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 00:19:28.859699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 00:19:28.859924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 00:19:28.862144 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 00:19:28.862801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 00:19:28.865034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 00:19:28.865229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 00:19:28.867407 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 00:19:28.867600 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 00:19:28.870008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 00:19:28.870219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 00:19:28.872484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 00:19:28.874001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 00:19:28.876173 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 00:19:28.877641 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 00:19:28.893070 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 00:19:28.894468 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 00:19:28.896764 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 00:19:28.900755 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 00:19:28.902728 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 00:19:28.902811 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 00:19:28.905278 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 00:19:28.906595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 00:19:28.912794 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 00:19:28.918336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 00:19:28.920774 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 00:19:28.923623 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 00:19:28.925175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 00:19:28.928795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 00:19:28.933856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 00:19:28.944352 systemd-journald[1234]: Time spent on flushing to /var/log/journal/45838be9d61c4f22b288586969beee4d is 28.744ms for 984 entries.
Nov 5 00:19:28.944352 systemd-journald[1234]: System Journal (/var/log/journal/45838be9d61c4f22b288586969beee4d) is 8M, max 588.1M, 580.1M free.
Nov 5 00:19:29.012157 systemd-journald[1234]: Received client request to flush runtime journal.
Nov 5 00:19:29.012205 kernel: loop1: detected capacity change from 0 to 110984
Nov 5 00:19:28.940566 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 00:19:28.944446 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 00:19:28.949828 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 00:19:28.955407 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 00:19:28.956882 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 00:19:28.961799 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 00:19:28.988145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 00:19:29.000962 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 00:19:29.014801 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 00:19:29.022715 kernel: loop2: detected capacity change from 0 to 229808
Nov 5 00:19:29.025230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 00:19:29.042204 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 00:19:29.045881 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 00:19:29.049802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 00:19:29.067793 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 00:19:29.071805 kernel: loop3: detected capacity change from 0 to 128048
Nov 5 00:19:29.091434 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 5 00:19:29.091460 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Nov 5 00:19:29.097817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 00:19:29.108714 kernel: loop4: detected capacity change from 0 to 8
Nov 5 00:19:29.130683 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 00:19:29.135785 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 00:19:29.157691 kernel: loop6: detected capacity change from 0 to 229808
Nov 5 00:19:29.180680 kernel: loop7: detected capacity change from 0 to 128048
Nov 5 00:19:29.201687 kernel: loop1: detected capacity change from 0 to 8
Nov 5 00:19:29.207053 (sd-merge)[1293]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-akamai.raw'.
Nov 5 00:19:29.213902 (sd-merge)[1293]: Merged extensions into '/usr'.
Nov 5 00:19:29.221108 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 00:19:29.221132 systemd[1]: Reloading...
Nov 5 00:19:29.253360 systemd-resolved[1286]: Positive Trust Anchors:
Nov 5 00:19:29.253631 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 00:19:29.253754 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 00:19:29.253829 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 00:19:29.260512 systemd-resolved[1286]: Defaulting to hostname 'linux'.
Nov 5 00:19:29.301799 zram_generator::config[1324]: No configuration found.
Nov 5 00:19:29.513092 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 00:19:29.513461 systemd[1]: Reloading finished in 291 ms.
Nov 5 00:19:29.545735 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 00:19:29.547242 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 00:19:29.548944 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 00:19:29.554561 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 00:19:29.563158 systemd[1]: Starting ensure-sysext.service...
Nov 5 00:19:29.565810 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 00:19:29.570879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 00:19:29.591793 systemd[1]: Reload requested from client PID 1370 ('systemctl') (unit ensure-sysext.service)...
Nov 5 00:19:29.591807 systemd[1]: Reloading...
Nov 5 00:19:29.608601 systemd-udevd[1372]: Using default interface naming scheme 'v257'.
Nov 5 00:19:29.609339 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 00:19:29.609525 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 00:19:29.609870 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 00:19:29.610128 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 00:19:29.612249 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 00:19:29.612565 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Nov 5 00:19:29.612640 systemd-tmpfiles[1371]: ACLs are not supported, ignoring.
Nov 5 00:19:29.626620 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 00:19:29.626640 systemd-tmpfiles[1371]: Skipping /boot
Nov 5 00:19:29.646782 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 00:19:29.646797 systemd-tmpfiles[1371]: Skipping /boot
Nov 5 00:19:29.679686 zram_generator::config[1405]: No configuration found.
Nov 5 00:19:29.836689 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 00:19:29.859688 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 00:19:29.865734 kernel: ACPI: button: Power Button [PWRF]
Nov 5 00:19:29.888677 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 00:19:29.888983 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 00:19:29.953345 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 00:19:29.953879 systemd[1]: Reloading finished in 361 ms.
Nov 5 00:19:29.963875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 00:19:29.978941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 00:19:30.037682 kernel: EDAC MC: Ver: 3.0.0
Nov 5 00:19:30.070283 systemd[1]: Finished ensure-sysext.service.
Nov 5 00:19:30.088155 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:19:30.089802 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 00:19:30.094818 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 00:19:30.097845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 00:19:30.099905 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 00:19:30.107194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 00:19:30.110878 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 00:19:30.117829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 00:19:30.122944 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 00:19:30.125026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 00:19:30.125067 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 00:19:30.130452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 00:19:30.137230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 00:19:30.145493 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 00:19:30.152952 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 00:19:30.154248 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 00:19:30.155291 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 00:19:30.155501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 00:19:30.158027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 00:19:30.158231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 00:19:30.173571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 00:19:30.176368 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 00:19:30.176559 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 00:19:30.179780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 00:19:30.180474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 00:19:30.188796 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 00:19:30.197048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 00:19:30.245674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 00:19:30.258579 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 5 00:19:30.262126 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 00:19:30.315922 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 00:19:30.323917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 00:19:30.347805 augenrules[1538]: No rules
Nov 5 00:19:30.353903 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 00:19:30.354151 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 00:19:30.371085 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 00:19:30.372383 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 00:19:30.379143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 00:19:30.381274 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 00:19:30.387273 systemd-networkd[1503]: lo: Link UP
Nov 5 00:19:30.387519 systemd-networkd[1503]: lo: Gained carrier
Nov 5 00:19:30.389612 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 00:19:30.389715 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 00:19:30.389996 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 00:19:30.391727 systemd-networkd[1503]: eth0: Link UP
Nov 5 00:19:30.391909 systemd[1]: Reached target network.target - Network.
Nov 5 00:19:30.392271 systemd-networkd[1503]: eth0: Gained carrier
Nov 5 00:19:30.392381 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 00:19:30.397647 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 00:19:30.399694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 00:19:30.448068 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 00:19:30.516735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 00:19:30.761310 ldconfig[1494]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 00:19:30.766341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 00:19:30.769520 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 00:19:30.794020 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 00:19:30.795276 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 00:19:30.796380 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 00:19:30.797400 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 00:19:30.798398 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 00:19:30.799740 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 00:19:30.800756 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 00:19:30.801734 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 00:19:30.802698 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 00:19:30.802746 systemd[1]: Reached target paths.target - Path Units.
Nov 5 00:19:30.803556 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 00:19:30.805848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 00:19:30.808521 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 00:19:30.811934 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 00:19:30.813034 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 00:19:30.814024 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 00:19:30.817389 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 00:19:30.818892 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 00:19:30.820406 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 00:19:30.822046 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 00:19:30.822878 systemd[1]: Reached target basic.target - Basic System.
Nov 5 00:19:30.823742 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 00:19:30.823792 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 00:19:30.824864 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 00:19:30.827817 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 5 00:19:30.830960 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 00:19:30.837095 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 00:19:30.842319 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 00:19:30.847908 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 00:19:30.848847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 00:19:30.854565 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 00:19:30.860370 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 00:19:30.870783 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 00:19:30.879471 jq[1563]: false
Nov 5 00:19:30.881278 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 00:19:30.885882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 00:19:30.888727 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Nov 5 00:19:30.889015 oslogin_cache_refresh[1565]: Refreshing passwd entry cache
Nov 5 00:19:30.893855 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting
Nov 5 00:19:30.895715 oslogin_cache_refresh[1565]: Failure getting users, quitting
Nov 5 00:19:30.896796 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 00:19:30.896796 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache
Nov 5 00:19:30.896796 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting
Nov 5 00:19:30.896796 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 00:19:30.895740 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 00:19:30.895785 oslogin_cache_refresh[1565]: Refreshing group entry cache
Nov 5 00:19:30.896234 oslogin_cache_refresh[1565]: Failure getting groups, quitting
Nov 5 00:19:30.896245 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 00:19:30.899857 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 00:19:30.901548 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 00:19:30.902969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 00:19:30.903611 extend-filesystems[1564]: Found /dev/sda6
Nov 5 00:19:30.905339 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 00:19:30.909050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 00:19:30.921850 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 00:19:30.930643 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 00:19:30.931527 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 00:19:30.932945 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 00:19:30.933246 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 00:19:30.939031 extend-filesystems[1564]: Found /dev/sda9
Nov 5 00:19:30.945904 jq[1578]: true
Nov 5 00:19:30.957778 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 00:19:30.958551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 00:19:30.965703 extend-filesystems[1564]: Checking size of /dev/sda9
Nov 5 00:19:30.976680 jq[1598]: true
Nov 5 00:19:30.994056 tar[1588]: linux-amd64/LICENSE
Nov 5 00:19:30.994056 tar[1588]: linux-amd64/helm
Nov 5 00:19:31.008020 (ntainerd)[1603]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 5 00:19:31.008390 extend-filesystems[1564]: Resized partition /dev/sda9
Nov 5 00:19:31.013140 update_engine[1576]: I20251105 00:19:31.008150 1576 main.cc:92] Flatcar Update Engine starting
Nov 5 00:19:31.017323 coreos-metadata[1560]: Nov 05 00:19:31.016 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 00:19:31.026477 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 00:19:31.035935 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19377147 blocks
Nov 5 00:19:31.030280 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 00:19:31.031065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 00:19:31.055501 dbus-daemon[1561]: [system] SELinux support is enabled
Nov 5 00:19:31.055779 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 00:19:31.062293 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 00:19:31.063248 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 00:19:31.064319 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 00:19:31.064337 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 00:19:31.090713 systemd-networkd[1503]: eth0: DHCPv4 address 172.234.219.54/24, gateway 172.234.219.1 acquired from 23.33.176.76
Nov 5 00:19:31.092714 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 00:19:31.094450 update_engine[1576]: I20251105 00:19:31.094280 1576 update_check_scheduler.cc:74] Next update check in 6m53s
Nov 5 00:19:31.095257 dbus-daemon[1561]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1503 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 5 00:19:31.095967 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection.
Nov 5 00:19:31.097743 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 00:19:31.114601 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 5 00:19:31.178428 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 00:19:31.181698 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 00:19:31.182111 systemd-logind[1575]: New seat seat0.
Nov 5 00:19:31.185822 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 00:19:31.994628 systemd-resolved[1286]: Clock change detected. Flushing caches.
Nov 5 00:19:31.997211 systemd-timesyncd[1506]: Contacted time server 159.203.82.102:123 (3.flatcar.pool.ntp.org).
Nov 5 00:19:31.997267 systemd-timesyncd[1506]: Initial clock synchronization to Wed 2025-11-05 00:19:31.993860 UTC.
Nov 5 00:19:32.025235 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 00:19:32.026787 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 00:19:32.034165 systemd[1]: Starting sshkeys.service...
Nov 5 00:19:32.065612 kernel: EXT4-fs (sda9): resized filesystem to 19377147
Nov 5 00:19:32.098641 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 5 00:19:32.102516 extend-filesystems[1616]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 5 00:19:32.102516 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 10
Nov 5 00:19:32.102516 extend-filesystems[1616]: The filesystem on /dev/sda9 is now 19377147 (4k) blocks long.
Nov 5 00:19:32.211290 extend-filesystems[1564]: Resized filesystem in /dev/sda9
Nov 5 00:19:32.103006 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 5 00:19:32.214890 sshd_keygen[1587]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 00:19:32.172030 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 5 00:19:32.114390 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 00:19:32.174642 dbus-daemon[1561]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1628 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 5 00:19:32.114654 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 00:19:32.211122 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 5 00:19:32.270320 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 5 00:19:32.311623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 00:19:32.320350 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 00:19:32.322965 coreos-metadata[1645]: Nov 05 00:19:32.322 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Nov 5 00:19:32.344900 containerd[1603]: time="2025-11-05T00:19:32Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 00:19:32.347660 containerd[1603]: time="2025-11-05T00:19:32.347472307Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 5 00:19:32.348696 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 00:19:32.348966 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 00:19:32.356372 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 00:19:32.385348 containerd[1603]: time="2025-11-05T00:19:32.385305076Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.34µs"
Nov 5 00:19:32.385385 containerd[1603]: time="2025-11-05T00:19:32.385344846Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 00:19:32.385385 containerd[1603]: time="2025-11-05T00:19:32.385375006Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 00:19:32.385571 containerd[1603]: time="2025-11-05T00:19:32.385541036Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 00:19:32.385608 containerd[1603]: time="2025-11-05T00:19:32.385572996Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 00:19:32.385637 containerd[1603]: time="2025-11-05T00:19:32.385616646Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 00:19:32.385730 containerd[1603]: time="2025-11-05T00:19:32.385697866Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 00:19:32.385766 containerd[1603]: time="2025-11-05T00:19:32.385727176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 00:19:32.385967 containerd[1603]: time="2025-11-05T00:19:32.385935626Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 00:19:32.386002 containerd[1603]: time="2025-11-05T00:19:32.385963786Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 00:19:32.386002 containerd[1603]: time="2025-11-05T00:19:32.385984476Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 00:19:32.386002 containerd[1603]: time="2025-11-05T00:19:32.385993736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 00:19:32.386397 containerd[1603]: time="2025-11-05T00:19:32.386092086Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 00:19:32.390663 containerd[1603]: time="2025-11-05T00:19:32.390627498Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 00:19:32.390699 containerd[1603]: time="2025-11-05T00:19:32.390684088Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 00:19:32.390722 containerd[1603]: time="2025-11-05T00:19:32.390701958Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 00:19:32.390786 containerd[1603]: time="2025-11-05T00:19:32.390752818Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 00:19:32.400647 containerd[1603]: time="2025-11-05T00:19:32.399345613Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 00:19:32.400647 containerd[1603]: time="2025-11-05T00:19:32.399427173Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 00:19:32.403608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 00:19:32.404686 containerd[1603]: time="2025-11-05T00:19:32.404654015Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 00:19:32.405071 containerd[1603]: time="2025-11-05T00:19:32.405035155Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 00:19:32.405071 containerd[1603]: time="2025-11-05T00:19:32.405067295Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 00:19:32.405136 containerd[1603]: time="2025-11-05T00:19:32.405085945Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 00:19:32.405136 containerd[1603]: time="2025-11-05T00:19:32.405121945Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 00:19:32.405136 containerd[1603]: time="2025-11-05T00:19:32.405134945Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 00:19:32.405220 containerd[1603]: time="2025-11-05T00:19:32.405146685Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 00:19:32.405220 containerd[1603]: time="2025-11-05T00:19:32.405158445Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 00:19:32.405220 containerd[1603]: time="2025-11-05T00:19:32.405167915Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 00:19:32.405563 containerd[1603]: time="2025-11-05T00:19:32.405530356Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 00:19:32.405607 containerd[1603]: time="2025-11-05T00:19:32.405562446Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 00:19:32.405607 containerd[1603]: time="2025-11-05T00:19:32.405581026Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 00:19:32.408356 containerd[1603]: time="2025-11-05T00:19:32.408324697Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 00:19:32.408392 containerd[1603]: time="2025-11-05T00:19:32.408360447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 00:19:32.408446 containerd[1603]: time="2025-11-05T00:19:32.408414327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.408451497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.409498658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.409530538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.409586778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.409599938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 00:19:32.409613 containerd[1603]: time="2025-11-05T00:19:32.409611578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 00:19:32.409942 containerd[1603]: time="2025-11-05T00:19:32.409621698Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 00:19:32.409942 containerd[1603]: time="2025-11-05T00:19:32.409673548Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 00:19:32.410393 containerd[1603]: time="2025-11-05T00:19:32.410259768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 00:19:32.410681 containerd[1603]: time="2025-11-05T00:19:32.410649968Z" level=info msg="Start snapshots syncer"
Nov 5 00:19:32.411148 containerd[1603]: time="2025-11-05T00:19:32.410701498Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 00:19:32.411173 containerd[1603]: time="2025-11-05T00:19:32.411143078Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 00:19:32.413385 containerd[1603]: time="2025-11-05T00:19:32.413332360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 00:19:32.413683 containerd[1603]: time="2025-11-05T00:19:32.413602880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 00:19:32.414523 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 00:19:32.417485 containerd[1603]: time="2025-11-05T00:19:32.416745621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418243542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418276532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418292282Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418305302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418323462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418333452Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418356202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418368512Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418382562Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418413622Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 00:19:32.418422 containerd[1603]: time="2025-11-05T00:19:32.418425222Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418433732Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418443602Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418450842Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418459452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418472522Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418488262Z" level=info msg="runtime interface created"
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418493422Z" level=info msg="created NRI interface"
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418501382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418512272Z" level=info msg="Connect containerd service"
Nov 5 00:19:32.418627 containerd[1603]: time="2025-11-05T00:19:32.418535182Z" level=info msg="using experimental NRI integration - disable nri
plugin to prevent this" Nov 5 00:19:32.419209 containerd[1603]: time="2025-11-05T00:19:32.419133802Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 00:19:32.426637 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 00:19:32.427719 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 00:19:32.469166 coreos-metadata[1645]: Nov 05 00:19:32.468 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 5 00:19:32.480287 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 00:19:32.500895 polkitd[1656]: Started polkitd version 126 Nov 5 00:19:32.511271 polkitd[1656]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 00:19:32.511578 polkitd[1656]: Loading rules from directory /run/polkit-1/rules.d Nov 5 00:19:32.511648 polkitd[1656]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 00:19:32.529488 polkitd[1656]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 00:19:32.529988 polkitd[1656]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 00:19:32.530035 polkitd[1656]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 00:19:32.531416 polkitd[1656]: Finished loading, compiling and executing 2 rules Nov 5 00:19:32.531714 systemd[1]: Started polkit.service - Authorization Manager. 
Nov 5 00:19:32.533747 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 5 00:19:32.535499 polkitd[1656]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 5 00:19:32.554797 systemd-hostnamed[1628]: Hostname set to <172-234-219-54> (transient)
Nov 5 00:19:32.555264 systemd-resolved[1286]: System hostname changed to '172-234-219-54'.
Nov 5 00:19:32.606595 containerd[1603]: time="2025-11-05T00:19:32.606388676Z" level=info msg="Start subscribing containerd event"
Nov 5 00:19:32.606595 containerd[1603]: time="2025-11-05T00:19:32.606493986Z" level=info msg="Start recovering state"
Nov 5 00:19:32.607264 containerd[1603]: time="2025-11-05T00:19:32.607248456Z" level=info msg="Start event monitor"
Nov 5 00:19:32.607412 containerd[1603]: time="2025-11-05T00:19:32.607328066Z" level=info msg="Start cni network conf syncer for default"
Nov 5 00:19:32.607412 containerd[1603]: time="2025-11-05T00:19:32.607340706Z" level=info msg="Start streaming server"
Nov 5 00:19:32.607412 containerd[1603]: time="2025-11-05T00:19:32.607357726Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 5 00:19:32.607412 containerd[1603]: time="2025-11-05T00:19:32.607365156Z" level=info msg="runtime interface starting up..."
Nov 5 00:19:32.607412 containerd[1603]: time="2025-11-05T00:19:32.607370876Z" level=info msg="starting plugins..."
Nov 5 00:19:32.608031 containerd[1603]: time="2025-11-05T00:19:32.607980297Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 5 00:19:32.608508 coreos-metadata[1645]: Nov 05 00:19:32.608 INFO Fetch successful
Nov 5 00:19:32.609256 containerd[1603]: time="2025-11-05T00:19:32.609225207Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 5 00:19:32.609659 containerd[1603]: time="2025-11-05T00:19:32.609457598Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 5 00:19:32.610481 systemd[1]: Started containerd.service - containerd container runtime.
Nov 5 00:19:32.611687 containerd[1603]: time="2025-11-05T00:19:32.611665569Z" level=info msg="containerd successfully booted in 0.267613s"
Nov 5 00:19:32.620087 tar[1588]: linux-amd64/README.md
Nov 5 00:19:32.637081 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 5 00:19:32.641242 update-ssh-keys[1695]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 00:19:32.640991 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 5 00:19:32.643547 systemd[1]: Finished sshkeys.service.
Nov 5 00:19:32.788674 coreos-metadata[1560]: Nov 05 00:19:32.788 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Nov 5 00:19:32.879781 coreos-metadata[1560]: Nov 05 00:19:32.879 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Nov 5 00:19:32.970477 systemd-networkd[1503]: eth0: Gained IPv6LL
Nov 5 00:19:32.973911 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 5 00:19:32.975580 systemd[1]: Reached target network-online.target - Network is Online.
Nov 5 00:19:32.978725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:19:32.983389 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 5 00:19:33.010759 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 5 00:19:33.061145 coreos-metadata[1560]: Nov 05 00:19:33.061 INFO Fetch successful
Nov 5 00:19:33.061356 coreos-metadata[1560]: Nov 05 00:19:33.061 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Nov 5 00:19:33.335734 coreos-metadata[1560]: Nov 05 00:19:33.335 INFO Fetch successful
Nov 5 00:19:33.455780 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 5 00:19:33.457577 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 5 00:19:33.979789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:19:33.981783 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 5 00:19:33.983468 systemd[1]: Startup finished in 2.615s (kernel) + 5.832s (initrd) + 5.574s (userspace) = 14.021s.
Nov 5 00:19:33.989395 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:19:34.567342 kubelet[1739]: E1105 00:19:34.567278 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:19:34.571151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:19:34.571395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:19:34.571803 systemd[1]: kubelet.service: Consumed 925ms CPU time, 267.1M memory peak.
Nov 5 00:19:36.248365 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 5 00:19:36.249921 systemd[1]: Started sshd@0-172.234.219.54:22-139.178.68.195:33442.service - OpenSSH per-connection server daemon (139.178.68.195:33442).
Nov 5 00:19:36.613164 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 33442 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:36.615661 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:36.626137 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 00:19:36.627729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 00:19:36.636674 systemd-logind[1575]: New session 1 of user core.
Nov 5 00:19:36.647280 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 00:19:36.650642 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 00:19:36.661628 (systemd)[1756]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 00:19:36.664276 systemd-logind[1575]: New session c1 of user core.
Nov 5 00:19:36.796666 systemd[1756]: Queued start job for default target default.target.
Nov 5 00:19:36.803356 systemd[1756]: Created slice app.slice - User Application Slice.
Nov 5 00:19:36.803390 systemd[1756]: Reached target paths.target - Paths.
Nov 5 00:19:36.803437 systemd[1756]: Reached target timers.target - Timers.
Nov 5 00:19:36.804807 systemd[1756]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 00:19:36.814746 systemd[1756]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 00:19:36.814796 systemd[1756]: Reached target sockets.target - Sockets.
Nov 5 00:19:36.814840 systemd[1756]: Reached target basic.target - Basic System.
Nov 5 00:19:36.814886 systemd[1756]: Reached target default.target - Main User Target.
Nov 5 00:19:36.814918 systemd[1756]: Startup finished in 144ms.
Nov 5 00:19:36.815312 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 00:19:36.822327 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 00:19:37.093995 systemd[1]: Started sshd@1-172.234.219.54:22-139.178.68.195:33456.service - OpenSSH per-connection server daemon (139.178.68.195:33456).
Nov 5 00:19:37.445592 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 33456 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:37.447540 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:37.453463 systemd-logind[1575]: New session 2 of user core.
Nov 5 00:19:37.462320 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 00:19:37.700955 sshd[1770]: Connection closed by 139.178.68.195 port 33456
Nov 5 00:19:37.701449 sshd-session[1767]: pam_unix(sshd:session): session closed for user core
Nov 5 00:19:37.706002 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit.
Nov 5 00:19:37.706511 systemd[1]: sshd@1-172.234.219.54:22-139.178.68.195:33456.service: Deactivated successfully.
Nov 5 00:19:37.708663 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 00:19:37.710814 systemd-logind[1575]: Removed session 2.
Nov 5 00:19:37.758406 systemd[1]: Started sshd@2-172.234.219.54:22-139.178.68.195:33460.service - OpenSSH per-connection server daemon (139.178.68.195:33460).
Nov 5 00:19:38.104269 sshd[1776]: Accepted publickey for core from 139.178.68.195 port 33460 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:38.106096 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:38.111269 systemd-logind[1575]: New session 3 of user core.
Nov 5 00:19:38.118305 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 00:19:38.350406 sshd[1779]: Connection closed by 139.178.68.195 port 33460
Nov 5 00:19:38.351167 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Nov 5 00:19:38.356243 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit.
Nov 5 00:19:38.356937 systemd[1]: sshd@2-172.234.219.54:22-139.178.68.195:33460.service: Deactivated successfully.
Nov 5 00:19:38.359304 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 00:19:38.361148 systemd-logind[1575]: Removed session 3.
Nov 5 00:19:38.411346 systemd[1]: Started sshd@3-172.234.219.54:22-139.178.68.195:33462.service - OpenSSH per-connection server daemon (139.178.68.195:33462).
Nov 5 00:19:38.765994 sshd[1785]: Accepted publickey for core from 139.178.68.195 port 33462 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:38.767944 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:38.776267 systemd-logind[1575]: New session 4 of user core.
Nov 5 00:19:38.781310 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 00:19:39.021893 sshd[1788]: Connection closed by 139.178.68.195 port 33462
Nov 5 00:19:39.023074 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Nov 5 00:19:39.028246 systemd[1]: sshd@3-172.234.219.54:22-139.178.68.195:33462.service: Deactivated successfully.
Nov 5 00:19:39.030544 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 00:19:39.031545 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit.
Nov 5 00:19:39.032948 systemd-logind[1575]: Removed session 4.
Nov 5 00:19:39.094578 systemd[1]: Started sshd@4-172.234.219.54:22-139.178.68.195:33476.service - OpenSSH per-connection server daemon (139.178.68.195:33476).
Nov 5 00:19:39.437384 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 33476 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:39.439574 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:39.445728 systemd-logind[1575]: New session 5 of user core.
Nov 5 00:19:39.451335 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 00:19:39.645295 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 00:19:39.645631 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:19:39.659335 sudo[1798]: pam_unix(sudo:session): session closed for user root
Nov 5 00:19:39.710751 sshd[1797]: Connection closed by 139.178.68.195 port 33476
Nov 5 00:19:39.712021 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Nov 5 00:19:39.720295 systemd[1]: sshd@4-172.234.219.54:22-139.178.68.195:33476.service: Deactivated successfully.
Nov 5 00:19:39.724563 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 00:19:39.726499 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit.
Nov 5 00:19:39.730158 systemd-logind[1575]: Removed session 5.
Nov 5 00:19:39.780029 systemd[1]: Started sshd@5-172.234.219.54:22-139.178.68.195:33482.service - OpenSSH per-connection server daemon (139.178.68.195:33482).
Nov 5 00:19:40.151538 sshd[1804]: Accepted publickey for core from 139.178.68.195 port 33482 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:40.153871 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:40.160082 systemd-logind[1575]: New session 6 of user core.
Nov 5 00:19:40.165311 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 00:19:40.361671 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 00:19:40.362005 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:19:40.366956 sudo[1809]: pam_unix(sudo:session): session closed for user root
Nov 5 00:19:40.373486 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 00:19:40.373806 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:19:40.384951 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 00:19:40.432600 augenrules[1831]: No rules
Nov 5 00:19:40.434546 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 00:19:40.435050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 00:19:40.436683 sudo[1808]: pam_unix(sudo:session): session closed for user root
Nov 5 00:19:40.492100 sshd[1807]: Connection closed by 139.178.68.195 port 33482
Nov 5 00:19:40.493240 sshd-session[1804]: pam_unix(sshd:session): session closed for user core
Nov 5 00:19:40.498806 systemd-logind[1575]: Session 6 logged out. Waiting for processes to exit.
Nov 5 00:19:40.499523 systemd[1]: sshd@5-172.234.219.54:22-139.178.68.195:33482.service: Deactivated successfully.
Nov 5 00:19:40.502380 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 00:19:40.504222 systemd-logind[1575]: Removed session 6.
Nov 5 00:19:40.556713 systemd[1]: Started sshd@6-172.234.219.54:22-139.178.68.195:33494.service - OpenSSH per-connection server daemon (139.178.68.195:33494).
Nov 5 00:19:40.923150 sshd[1840]: Accepted publickey for core from 139.178.68.195 port 33494 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:19:40.925363 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:19:40.931854 systemd-logind[1575]: New session 7 of user core.
Nov 5 00:19:40.939465 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 00:19:41.135004 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 00:19:41.135398 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 00:19:41.485781 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 00:19:41.496532 (dockerd)[1862]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 00:19:41.728705 dockerd[1862]: time="2025-11-05T00:19:41.728634584Z" level=info msg="Starting up"
Nov 5 00:19:41.729827 dockerd[1862]: time="2025-11-05T00:19:41.729796085Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 00:19:41.743255 dockerd[1862]: time="2025-11-05T00:19:41.742500381Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 00:19:41.760429 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1245213198-merged.mount: Deactivated successfully.
Nov 5 00:19:41.792410 dockerd[1862]: time="2025-11-05T00:19:41.792254306Z" level=info msg="Loading containers: start."
Nov 5 00:19:41.806227 kernel: Initializing XFRM netlink socket
Nov 5 00:19:42.074947 systemd-networkd[1503]: docker0: Link UP
Nov 5 00:19:42.078357 dockerd[1862]: time="2025-11-05T00:19:42.078328329Z" level=info msg="Loading containers: done."
Nov 5 00:19:42.092070 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3769197203-merged.mount: Deactivated successfully.
Nov 5 00:19:42.092578 dockerd[1862]: time="2025-11-05T00:19:42.092548566Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 00:19:42.092639 dockerd[1862]: time="2025-11-05T00:19:42.092609276Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 00:19:42.092718 dockerd[1862]: time="2025-11-05T00:19:42.092689606Z" level=info msg="Initializing buildkit"
Nov 5 00:19:42.113160 dockerd[1862]: time="2025-11-05T00:19:42.113109906Z" level=info msg="Completed buildkit initialization"
Nov 5 00:19:42.122102 dockerd[1862]: time="2025-11-05T00:19:42.122030871Z" level=info msg="Daemon has completed initialization"
Nov 5 00:19:42.122291 dockerd[1862]: time="2025-11-05T00:19:42.122241841Z" level=info msg="API listen on /run/docker.sock"
Nov 5 00:19:42.123624 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 00:19:42.804327 containerd[1603]: time="2025-11-05T00:19:42.804259681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 5 00:19:43.580335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount168294880.mount: Deactivated successfully.
Nov 5 00:19:44.822463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 5 00:19:44.825362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 00:19:44.964567 containerd[1603]: time="2025-11-05T00:19:44.964522831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:44.966328 containerd[1603]: time="2025-11-05T00:19:44.966286082Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 5 00:19:44.968790 containerd[1603]: time="2025-11-05T00:19:44.968737073Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:44.972752 containerd[1603]: time="2025-11-05T00:19:44.972390055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:44.973298 containerd[1603]: time="2025-11-05T00:19:44.973263555Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.168961514s"
Nov 5 00:19:44.973345 containerd[1603]: time="2025-11-05T00:19:44.973302295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 5 00:19:44.974047 containerd[1603]: time="2025-11-05T00:19:44.974018336Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 5 00:19:45.012433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 00:19:45.018910 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 00:19:45.063528 kubelet[2138]: E1105 00:19:45.063464 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 00:19:45.069706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 00:19:45.069919 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 00:19:45.070554 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.7M memory peak.
Nov 5 00:19:46.298243 containerd[1603]: time="2025-11-05T00:19:46.297344157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:46.298243 containerd[1603]: time="2025-11-05T00:19:46.298164267Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 5 00:19:46.298904 containerd[1603]: time="2025-11-05T00:19:46.298861038Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:46.300765 containerd[1603]: time="2025-11-05T00:19:46.300742578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:46.301758 containerd[1603]: time="2025-11-05T00:19:46.301723399Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.327674473s"
Nov 5 00:19:46.301806 containerd[1603]: time="2025-11-05T00:19:46.301757889Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 5 00:19:46.302320 containerd[1603]: time="2025-11-05T00:19:46.302285479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 5 00:19:47.470248 containerd[1603]: time="2025-11-05T00:19:47.469940683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:47.471559 containerd[1603]: time="2025-11-05T00:19:47.471295173Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 5 00:19:47.472379 containerd[1603]: time="2025-11-05T00:19:47.472347624Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:47.474812 containerd[1603]: time="2025-11-05T00:19:47.474772385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:47.475869 containerd[1603]: time="2025-11-05T00:19:47.475837926Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.173517127s"
Nov 5 00:19:47.475957 containerd[1603]: time="2025-11-05T00:19:47.475943856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 5 00:19:47.477262 containerd[1603]: time="2025-11-05T00:19:47.477223986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 5 00:19:48.682339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266580508.mount: Deactivated successfully.
Nov 5 00:19:49.075713 containerd[1603]: time="2025-11-05T00:19:49.075588255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:49.076510 containerd[1603]: time="2025-11-05T00:19:49.076385935Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 5 00:19:49.077209 containerd[1603]: time="2025-11-05T00:19:49.077146906Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:49.079234 containerd[1603]: time="2025-11-05T00:19:49.078868117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:49.079996 containerd[1603]: time="2025-11-05T00:19:49.079375277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.602085571s"
Nov 5 00:19:49.079996 containerd[1603]: time="2025-11-05T00:19:49.079410107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 5 00:19:49.079996 containerd[1603]: time="2025-11-05T00:19:49.079828847Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 00:19:49.726111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287500878.mount: Deactivated successfully.
Nov 5 00:19:50.464553 containerd[1603]: time="2025-11-05T00:19:50.463701088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:50.464553 containerd[1603]: time="2025-11-05T00:19:50.464526019Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 5 00:19:50.465084 containerd[1603]: time="2025-11-05T00:19:50.465062439Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:50.467034 containerd[1603]: time="2025-11-05T00:19:50.467013240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:50.468026 containerd[1603]: time="2025-11-05T00:19:50.468004561Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.388153934s"
Nov 5 00:19:50.468100 containerd[1603]: time="2025-11-05T00:19:50.468086351Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 5 00:19:50.468607 containerd[1603]: time="2025-11-05T00:19:50.468580291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 00:19:51.073371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43941497.mount: Deactivated successfully.
Nov 5 00:19:51.079407 containerd[1603]: time="2025-11-05T00:19:51.079361526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:19:51.080293 containerd[1603]: time="2025-11-05T00:19:51.080003836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 5 00:19:51.080926 containerd[1603]: time="2025-11-05T00:19:51.080895997Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:19:51.084291 containerd[1603]: time="2025-11-05T00:19:51.084261459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 00:19:51.084893 containerd[1603]: time="2025-11-05T00:19:51.084864569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 616.041688ms"
Nov 5 00:19:51.084936 containerd[1603]: time="2025-11-05T00:19:51.084895619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 00:19:51.085509 containerd[1603]: time="2025-11-05T00:19:51.085476479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 00:19:51.749192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482414238.mount: Deactivated successfully.
Nov 5 00:19:53.264771 containerd[1603]: time="2025-11-05T00:19:53.264728128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:53.265987 containerd[1603]: time="2025-11-05T00:19:53.265742469Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 5 00:19:53.266735 containerd[1603]: time="2025-11-05T00:19:53.266705599Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:53.269415 containerd[1603]: time="2025-11-05T00:19:53.269383830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 00:19:53.270475 containerd[1603]: time="2025-11-05T00:19:53.270445581Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size
\"58938593\" in 2.184941332s" Nov 5 00:19:53.270521 containerd[1603]: time="2025-11-05T00:19:53.270476161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 00:19:55.320591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 00:19:55.324510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:19:55.514421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:19:55.521686 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 00:19:55.564036 kubelet[2301]: E1105 00:19:55.563996 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 00:19:55.568047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 00:19:55.568573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 00:19:55.569266 systemd[1]: kubelet.service: Consumed 195ms CPU time, 109.5M memory peak. Nov 5 00:19:57.035756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:19:57.035982 systemd[1]: kubelet.service: Consumed 195ms CPU time, 109.5M memory peak. Nov 5 00:19:57.038291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:19:57.072664 systemd[1]: Reload requested from client PID 2315 ('systemctl') (unit session-7.scope)... Nov 5 00:19:57.072680 systemd[1]: Reloading... Nov 5 00:19:57.228611 zram_generator::config[2360]: No configuration found. Nov 5 00:19:57.445075 systemd[1]: Reloading finished in 372 ms. 
Nov 5 00:19:57.506828 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 00:19:57.506965 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 00:19:57.507352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:19:57.507415 systemd[1]: kubelet.service: Consumed 156ms CPU time, 98.4M memory peak. Nov 5 00:19:57.509087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:19:57.712969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:19:57.722697 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 00:19:57.762755 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:19:57.762755 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 00:19:57.762755 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 00:19:57.763117 kubelet[2415]: I1105 00:19:57.762791 2415 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 00:19:58.002878 kubelet[2415]: I1105 00:19:58.002581 2415 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 00:19:58.002878 kubelet[2415]: I1105 00:19:58.002611 2415 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 00:19:58.002878 kubelet[2415]: I1105 00:19:58.002857 2415 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 00:19:58.034351 kubelet[2415]: E1105 00:19:58.034298 2415 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.219.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.219.54:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 00:19:58.035135 kubelet[2415]: I1105 00:19:58.034889 2415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 00:19:58.042051 kubelet[2415]: I1105 00:19:58.042024 2415 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 00:19:58.049647 kubelet[2415]: I1105 00:19:58.049633 2415 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 00:19:58.050040 kubelet[2415]: I1105 00:19:58.050013 2415 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 00:19:58.050410 kubelet[2415]: I1105 00:19:58.050091 2415 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-219-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 00:19:58.050849 kubelet[2415]: I1105 00:19:58.050536 2415 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 00:19:58.050849 
kubelet[2415]: I1105 00:19:58.050551 2415 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 00:19:58.050849 kubelet[2415]: I1105 00:19:58.050670 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:19:58.053023 kubelet[2415]: I1105 00:19:58.053010 2415 kubelet.go:480] "Attempting to sync node with API server" Nov 5 00:19:58.053415 kubelet[2415]: I1105 00:19:58.053401 2415 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 00:19:58.053491 kubelet[2415]: I1105 00:19:58.053482 2415 kubelet.go:386] "Adding apiserver pod source" Nov 5 00:19:58.053553 kubelet[2415]: I1105 00:19:58.053545 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 00:19:58.065560 kubelet[2415]: E1105 00:19:58.065531 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.219.54:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-219-54&limit=500&resourceVersion=0\": dial tcp 172.234.219.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 00:19:58.067528 kubelet[2415]: E1105 00:19:58.067499 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.219.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.219.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 00:19:58.068625 kubelet[2415]: I1105 00:19:58.068603 2415 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 00:19:58.069393 kubelet[2415]: I1105 00:19:58.069377 2415 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 00:19:58.070373 kubelet[2415]: W1105 
00:19:58.070360 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 00:19:58.074475 kubelet[2415]: I1105 00:19:58.074459 2415 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 00:19:58.074684 kubelet[2415]: I1105 00:19:58.074661 2415 server.go:1289] "Started kubelet" Nov 5 00:19:58.076119 kubelet[2415]: I1105 00:19:58.076103 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 00:19:58.080220 kubelet[2415]: E1105 00:19:58.078503 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.219.54:6443/api/v1/namespaces/default/events\": dial tcp 172.234.219.54:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-219-54.1874f454863194b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-219-54,UID:172-234-219-54,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-219-54,},FirstTimestamp:2025-11-05 00:19:58.074533041 +0000 UTC m=+0.346962844,LastTimestamp:2025-11-05 00:19:58.074533041 +0000 UTC m=+0.346962844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-219-54,}" Nov 5 00:19:58.080885 kubelet[2415]: I1105 00:19:58.080850 2415 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 00:19:58.082482 kubelet[2415]: I1105 00:19:58.082467 2415 server.go:317] "Adding debug handlers to kubelet server" Nov 5 00:19:58.086118 kubelet[2415]: I1105 00:19:58.086092 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 00:19:58.086603 kubelet[2415]: I1105 00:19:58.086589 2415 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 
00:19:58.086813 kubelet[2415]: I1105 00:19:58.086799 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 00:19:58.088166 kubelet[2415]: I1105 00:19:58.087954 2415 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 00:19:58.089364 kubelet[2415]: E1105 00:19:58.088410 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-219-54\" not found" Nov 5 00:19:58.089557 kubelet[2415]: I1105 00:19:58.089535 2415 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 00:19:58.089633 kubelet[2415]: I1105 00:19:58.089611 2415 reconciler.go:26] "Reconciler: start to sync state" Nov 5 00:19:58.089719 kubelet[2415]: E1105 00:19:58.089689 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.219.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-219-54?timeout=10s\": dial tcp 172.234.219.54:6443: connect: connection refused" interval="200ms" Nov 5 00:19:58.089910 kubelet[2415]: I1105 00:19:58.089886 2415 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 00:19:58.091307 kubelet[2415]: E1105 00:19:58.090509 2415 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 00:19:58.092351 kubelet[2415]: I1105 00:19:58.092331 2415 factory.go:223] Registration of the containerd container factory successfully Nov 5 00:19:58.092351 kubelet[2415]: I1105 00:19:58.092351 2415 factory.go:223] Registration of the systemd container factory successfully Nov 5 00:19:58.102473 kubelet[2415]: I1105 00:19:58.102446 2415 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 5 00:19:58.103962 kubelet[2415]: I1105 00:19:58.103947 2415 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 00:19:58.104029 kubelet[2415]: I1105 00:19:58.104020 2415 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 00:19:58.104089 kubelet[2415]: I1105 00:19:58.104080 2415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 00:19:58.104133 kubelet[2415]: I1105 00:19:58.104125 2415 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 00:19:58.104336 kubelet[2415]: E1105 00:19:58.104312 2415 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 00:19:58.112243 kubelet[2415]: E1105 00:19:58.112223 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.219.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.219.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 00:19:58.112450 kubelet[2415]: E1105 00:19:58.112431 2415 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.219.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.219.54:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 00:19:58.119574 kubelet[2415]: I1105 00:19:58.119559 2415 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 00:19:58.119670 kubelet[2415]: I1105 00:19:58.119656 2415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 00:19:58.119734 kubelet[2415]: I1105 00:19:58.119725 2415 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:19:58.121630 
kubelet[2415]: I1105 00:19:58.121617 2415 policy_none.go:49] "None policy: Start" Nov 5 00:19:58.121706 kubelet[2415]: I1105 00:19:58.121696 2415 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 00:19:58.121764 kubelet[2415]: I1105 00:19:58.121755 2415 state_mem.go:35] "Initializing new in-memory state store" Nov 5 00:19:58.128654 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 00:19:58.150206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 00:19:58.154001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 00:19:58.173033 kubelet[2415]: E1105 00:19:58.172991 2415 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 00:19:58.173342 kubelet[2415]: I1105 00:19:58.173303 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 00:19:58.173342 kubelet[2415]: I1105 00:19:58.173324 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 00:19:58.173780 kubelet[2415]: I1105 00:19:58.173708 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 00:19:58.175789 kubelet[2415]: E1105 00:19:58.175761 2415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 00:19:58.175838 kubelet[2415]: E1105 00:19:58.175800 2415 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-219-54\" not found" Nov 5 00:19:58.218670 systemd[1]: Created slice kubepods-burstable-pod26d3459e023f50debfba804673cf668c.slice - libcontainer container kubepods-burstable-pod26d3459e023f50debfba804673cf668c.slice. 
Nov 5 00:19:58.229353 kubelet[2415]: E1105 00:19:58.229316 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:58.233240 systemd[1]: Created slice kubepods-burstable-pod451f69159bc6d42d2e2f196a3322e8ae.slice - libcontainer container kubepods-burstable-pod451f69159bc6d42d2e2f196a3322e8ae.slice. Nov 5 00:19:58.236013 kubelet[2415]: E1105 00:19:58.235967 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:58.239524 systemd[1]: Created slice kubepods-burstable-podea39544c3c8c8de584720b9302ccfaa7.slice - libcontainer container kubepods-burstable-podea39544c3c8c8de584720b9302ccfaa7.slice. Nov 5 00:19:58.243548 kubelet[2415]: E1105 00:19:58.243395 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:58.275770 kubelet[2415]: I1105 00:19:58.275650 2415 kubelet_node_status.go:75] "Attempting to register node" node="172-234-219-54" Nov 5 00:19:58.276420 kubelet[2415]: E1105 00:19:58.276372 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.219.54:6443/api/v1/nodes\": dial tcp 172.234.219.54:6443: connect: connection refused" node="172-234-219-54" Nov 5 00:19:58.290771 kubelet[2415]: E1105 00:19:58.290742 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.219.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-219-54?timeout=10s\": dial tcp 172.234.219.54:6443: connect: connection refused" interval="400ms" Nov 5 00:19:58.391998 kubelet[2415]: I1105 00:19:58.390735 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-flexvolume-dir\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:19:58.392438 kubelet[2415]: I1105 00:19:58.392274 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-k8s-certs\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:19:58.392438 kubelet[2415]: I1105 00:19:58.392354 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:19:58.392808 kubelet[2415]: I1105 00:19:58.392382 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-ca-certs\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:19:58.392808 kubelet[2415]: I1105 00:19:58.392717 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-kubeconfig\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:19:58.392965 kubelet[2415]: I1105 00:19:58.392904 2415 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/451f69159bc6d42d2e2f196a3322e8ae-kubeconfig\") pod \"kube-scheduler-172-234-219-54\" (UID: \"451f69159bc6d42d2e2f196a3322e8ae\") " pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:19:58.392965 kubelet[2415]: I1105 00:19:58.392936 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-ca-certs\") pod \"kube-apiserver-172-234-219-54\" (UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:19:58.394776 kubelet[2415]: I1105 00:19:58.394251 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-k8s-certs\") pod \"kube-apiserver-172-234-219-54\" (UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:19:58.394955 kubelet[2415]: I1105 00:19:58.394928 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-219-54\" (UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:19:58.480284 kubelet[2415]: I1105 00:19:58.480220 2415 kubelet_node_status.go:75] "Attempting to register node" node="172-234-219-54" Nov 5 00:19:58.480726 kubelet[2415]: E1105 00:19:58.480690 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.219.54:6443/api/v1/nodes\": dial tcp 172.234.219.54:6443: connect: connection refused" node="172-234-219-54" Nov 5 00:19:58.530764 kubelet[2415]: E1105 00:19:58.530579 2415 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.531885 containerd[1603]: time="2025-11-05T00:19:58.531695600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-219-54,Uid:26d3459e023f50debfba804673cf668c,Namespace:kube-system,Attempt:0,}" Nov 5 00:19:58.537854 kubelet[2415]: E1105 00:19:58.537490 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.538492 containerd[1603]: time="2025-11-05T00:19:58.538436173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-219-54,Uid:451f69159bc6d42d2e2f196a3322e8ae,Namespace:kube-system,Attempt:0,}" Nov 5 00:19:58.548410 kubelet[2415]: E1105 00:19:58.548382 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.557687 containerd[1603]: time="2025-11-05T00:19:58.557649803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-219-54,Uid:ea39544c3c8c8de584720b9302ccfaa7,Namespace:kube-system,Attempt:0,}" Nov 5 00:19:58.565026 containerd[1603]: time="2025-11-05T00:19:58.564997836Z" level=info msg="connecting to shim efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94" address="unix:///run/containerd/s/b67001d99086f0e20f813ef2647616e9bd0dd0dc11774edf3426fd2b4be948a5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:19:58.582433 containerd[1603]: time="2025-11-05T00:19:58.582316395Z" level=info msg="connecting to shim 57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6" address="unix:///run/containerd/s/bc6bd4ecc3958be199e3963ce48a49dbb1fae90b786ec9ace11d14e376ca9bf1" 
namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:19:58.596379 containerd[1603]: time="2025-11-05T00:19:58.596258492Z" level=info msg="connecting to shim 472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579" address="unix:///run/containerd/s/147d53d9c616fd5d0c8c4c75ba2f2a7848a38fa46354676e848eda409e4b0008" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:19:58.615388 systemd[1]: Started cri-containerd-efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94.scope - libcontainer container efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94. Nov 5 00:19:58.640595 systemd[1]: Started cri-containerd-472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579.scope - libcontainer container 472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579. Nov 5 00:19:58.652790 systemd[1]: Started cri-containerd-57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6.scope - libcontainer container 57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6. 
Nov 5 00:19:58.693678 kubelet[2415]: E1105 00:19:58.693610 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.219.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-219-54?timeout=10s\": dial tcp 172.234.219.54:6443: connect: connection refused" interval="800ms" Nov 5 00:19:58.727918 containerd[1603]: time="2025-11-05T00:19:58.727874078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-219-54,Uid:451f69159bc6d42d2e2f196a3322e8ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579\"" Nov 5 00:19:58.729776 kubelet[2415]: E1105 00:19:58.729745 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.736807 containerd[1603]: time="2025-11-05T00:19:58.736768632Z" level=info msg="CreateContainer within sandbox \"472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 00:19:58.740655 containerd[1603]: time="2025-11-05T00:19:58.740634504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-219-54,Uid:26d3459e023f50debfba804673cf668c,Namespace:kube-system,Attempt:0,} returns sandbox id \"efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94\"" Nov 5 00:19:58.742871 kubelet[2415]: E1105 00:19:58.742800 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.744943 containerd[1603]: time="2025-11-05T00:19:58.744924666Z" level=info msg="Container e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:19:58.748969 
containerd[1603]: time="2025-11-05T00:19:58.748930988Z" level=info msg="CreateContainer within sandbox \"efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 00:19:58.753378 containerd[1603]: time="2025-11-05T00:19:58.753357360Z" level=info msg="CreateContainer within sandbox \"472853e73660596db074aa2f2ded3307a6ea625105fbddbd14a1df05233f2579\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f\"" Nov 5 00:19:58.755810 containerd[1603]: time="2025-11-05T00:19:58.755788912Z" level=info msg="StartContainer for \"e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f\"" Nov 5 00:19:58.757496 containerd[1603]: time="2025-11-05T00:19:58.757476593Z" level=info msg="connecting to shim e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f" address="unix:///run/containerd/s/147d53d9c616fd5d0c8c4c75ba2f2a7848a38fa46354676e848eda409e4b0008" protocol=ttrpc version=3 Nov 5 00:19:58.761553 containerd[1603]: time="2025-11-05T00:19:58.761474064Z" level=info msg="Container a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:19:58.770702 containerd[1603]: time="2025-11-05T00:19:58.770679209Z" level=info msg="CreateContainer within sandbox \"efdfccea7eec4bfc5340f7f820703534dfa68746888a0eca11adf563ebacde94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2\"" Nov 5 00:19:58.771419 containerd[1603]: time="2025-11-05T00:19:58.771400499Z" level=info msg="StartContainer for \"a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2\"" Nov 5 00:19:58.771517 containerd[1603]: time="2025-11-05T00:19:58.771487770Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-172-234-219-54,Uid:ea39544c3c8c8de584720b9302ccfaa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6\"" Nov 5 00:19:58.772975 kubelet[2415]: E1105 00:19:58.772914 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:58.773314 containerd[1603]: time="2025-11-05T00:19:58.772803410Z" level=info msg="connecting to shim a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2" address="unix:///run/containerd/s/b67001d99086f0e20f813ef2647616e9bd0dd0dc11774edf3426fd2b4be948a5" protocol=ttrpc version=3 Nov 5 00:19:58.776354 containerd[1603]: time="2025-11-05T00:19:58.776285782Z" level=info msg="CreateContainer within sandbox \"57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 00:19:58.789068 containerd[1603]: time="2025-11-05T00:19:58.788976498Z" level=info msg="Container 8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:19:58.794327 systemd[1]: Started cri-containerd-e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f.scope - libcontainer container e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f. Nov 5 00:19:58.803442 systemd[1]: Started cri-containerd-a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2.scope - libcontainer container a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2. 
Nov 5 00:19:58.804824 containerd[1603]: time="2025-11-05T00:19:58.804213476Z" level=info msg="CreateContainer within sandbox \"57e694c8ae6b438ee6cce78aa13f570c0638224f6fd6820f7e668fa7748667b6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572\"" Nov 5 00:19:58.806526 containerd[1603]: time="2025-11-05T00:19:58.806488677Z" level=info msg="StartContainer for \"8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572\"" Nov 5 00:19:58.808233 containerd[1603]: time="2025-11-05T00:19:58.807855168Z" level=info msg="connecting to shim 8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572" address="unix:///run/containerd/s/bc6bd4ecc3958be199e3963ce48a49dbb1fae90b786ec9ace11d14e376ca9bf1" protocol=ttrpc version=3 Nov 5 00:19:58.847313 systemd[1]: Started cri-containerd-8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572.scope - libcontainer container 8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572. 
Nov 5 00:19:58.883313 kubelet[2415]: I1105 00:19:58.883251 2415 kubelet_node_status.go:75] "Attempting to register node" node="172-234-219-54" Nov 5 00:19:58.884689 kubelet[2415]: E1105 00:19:58.884657 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.219.54:6443/api/v1/nodes\": dial tcp 172.234.219.54:6443: connect: connection refused" node="172-234-219-54" Nov 5 00:19:58.913731 containerd[1603]: time="2025-11-05T00:19:58.913688591Z" level=info msg="StartContainer for \"e789846bc8294ece7f9a5d7351e92b16ce15cb7386d898cf1ce6a130b7428d3f\" returns successfully" Nov 5 00:19:58.929502 containerd[1603]: time="2025-11-05T00:19:58.929468768Z" level=info msg="StartContainer for \"a848481e688443abe5d474bb3fca3bcf48259919d9cf5dba9454214cd7da24e2\" returns successfully" Nov 5 00:19:58.955498 containerd[1603]: time="2025-11-05T00:19:58.955454871Z" level=info msg="StartContainer for \"8b188be413b8d3c7eae8bd1ed24316233a60f8f08c011381cffc7f71c8ec0572\" returns successfully" Nov 5 00:19:59.124699 kubelet[2415]: E1105 00:19:59.124672 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:59.124818 kubelet[2415]: E1105 00:19:59.124794 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:59.129637 kubelet[2415]: E1105 00:19:59.129613 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:59.129727 kubelet[2415]: E1105 00:19:59.129705 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:59.130930 
kubelet[2415]: E1105 00:19:59.130907 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:19:59.131220 kubelet[2415]: E1105 00:19:59.131197 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:19:59.687398 kubelet[2415]: I1105 00:19:59.686955 2415 kubelet_node_status.go:75] "Attempting to register node" node="172-234-219-54" Nov 5 00:20:00.134334 kubelet[2415]: E1105 00:20:00.134305 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:20:00.134730 kubelet[2415]: E1105 00:20:00.134456 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:00.134730 kubelet[2415]: E1105 00:20:00.134654 2415 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:20:00.134730 kubelet[2415]: E1105 00:20:00.134729 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:00.883048 kubelet[2415]: E1105 00:20:00.882991 2415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-219-54\" not found" node="172-234-219-54" Nov 5 00:20:01.027206 kubelet[2415]: I1105 00:20:01.026849 2415 kubelet_node_status.go:78] "Successfully registered node" node="172-234-219-54" Nov 5 00:20:01.027206 kubelet[2415]: E1105 00:20:01.027016 2415 kubelet_node_status.go:548] "Error 
updating node status, will retry" err="error getting node \"172-234-219-54\": node \"172-234-219-54\" not found" Nov 5 00:20:01.067162 kubelet[2415]: I1105 00:20:01.067135 2415 apiserver.go:52] "Watching apiserver" Nov 5 00:20:01.090154 kubelet[2415]: I1105 00:20:01.090127 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:01.090340 kubelet[2415]: I1105 00:20:01.090317 2415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 00:20:01.097241 kubelet[2415]: E1105 00:20:01.097113 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-219-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:01.097241 kubelet[2415]: I1105 00:20:01.097131 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:01.100477 kubelet[2415]: E1105 00:20:01.100457 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-219-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:01.100477 kubelet[2415]: I1105 00:20:01.100474 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:01.101979 kubelet[2415]: E1105 00:20:01.101937 2415 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-219-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:01.132591 kubelet[2415]: I1105 00:20:01.132572 2415 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:01.134267 kubelet[2415]: E1105 00:20:01.134108 2415 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-172-234-219-54\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:01.134267 kubelet[2415]: E1105 00:20:01.134264 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:02.581860 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 5 00:20:03.187918 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-7.scope)... Nov 5 00:20:03.188073 systemd[1]: Reloading... Nov 5 00:20:03.326456 zram_generator::config[2741]: No configuration found. Nov 5 00:20:03.576770 systemd[1]: Reloading finished in 388 ms. Nov 5 00:20:03.610340 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:20:03.623062 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 00:20:03.623506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:20:03.623768 systemd[1]: kubelet.service: Consumed 772ms CPU time, 131M memory peak. Nov 5 00:20:03.627910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 00:20:03.819580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 00:20:03.831903 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 00:20:03.892243 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:20:03.892243 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 5 00:20:03.892243 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 00:20:03.892662 kubelet[2792]: I1105 00:20:03.892630 2792 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 00:20:03.901842 kubelet[2792]: I1105 00:20:03.901823 2792 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 00:20:03.901915 kubelet[2792]: I1105 00:20:03.901905 2792 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 00:20:03.902160 kubelet[2792]: I1105 00:20:03.902148 2792 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 00:20:03.903979 kubelet[2792]: I1105 00:20:03.903756 2792 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 00:20:03.906450 kubelet[2792]: I1105 00:20:03.906433 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 00:20:03.911770 kubelet[2792]: I1105 00:20:03.911755 2792 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 00:20:03.917804 kubelet[2792]: I1105 00:20:03.917785 2792 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 00:20:03.918682 kubelet[2792]: I1105 00:20:03.918620 2792 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 00:20:03.918900 kubelet[2792]: I1105 00:20:03.918680 2792 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-219-54","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 00:20:03.918900 kubelet[2792]: I1105 00:20:03.918877 2792 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 00:20:03.918900 
kubelet[2792]: I1105 00:20:03.918890 2792 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 00:20:03.919041 kubelet[2792]: I1105 00:20:03.918944 2792 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:20:03.919141 kubelet[2792]: I1105 00:20:03.919126 2792 kubelet.go:480] "Attempting to sync node with API server" Nov 5 00:20:03.919397 kubelet[2792]: I1105 00:20:03.919145 2792 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 00:20:03.919397 kubelet[2792]: I1105 00:20:03.919174 2792 kubelet.go:386] "Adding apiserver pod source" Nov 5 00:20:03.919458 kubelet[2792]: I1105 00:20:03.919417 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 00:20:03.924431 kubelet[2792]: I1105 00:20:03.922049 2792 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 00:20:03.924431 kubelet[2792]: I1105 00:20:03.922698 2792 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 00:20:03.926813 kubelet[2792]: I1105 00:20:03.926584 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 00:20:03.926813 kubelet[2792]: I1105 00:20:03.926633 2792 server.go:1289] "Started kubelet" Nov 5 00:20:03.929810 kubelet[2792]: I1105 00:20:03.928658 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 00:20:03.938972 kubelet[2792]: I1105 00:20:03.938341 2792 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 00:20:03.939367 kubelet[2792]: I1105 00:20:03.939175 2792 server.go:317] "Adding debug handlers to kubelet server" Nov 5 00:20:03.948404 kubelet[2792]: I1105 00:20:03.947448 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 00:20:03.948404 kubelet[2792]: I1105 00:20:03.947653 2792 server.go:255] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 00:20:03.948404 kubelet[2792]: I1105 00:20:03.947877 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 00:20:03.949782 kubelet[2792]: I1105 00:20:03.949767 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 00:20:03.950028 kubelet[2792]: E1105 00:20:03.950001 2792 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-219-54\" not found" Nov 5 00:20:03.951265 kubelet[2792]: I1105 00:20:03.951238 2792 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 00:20:03.951476 kubelet[2792]: I1105 00:20:03.951445 2792 reconciler.go:26] "Reconciler: start to sync state" Nov 5 00:20:03.955721 kubelet[2792]: I1105 00:20:03.955688 2792 factory.go:223] Registration of the systemd container factory successfully Nov 5 00:20:03.956168 kubelet[2792]: I1105 00:20:03.955844 2792 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 00:20:03.963393 kubelet[2792]: E1105 00:20:03.962045 2792 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 00:20:03.963393 kubelet[2792]: I1105 00:20:03.962765 2792 factory.go:223] Registration of the containerd container factory successfully Nov 5 00:20:03.983501 kubelet[2792]: I1105 00:20:03.983459 2792 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 00:20:03.986981 kubelet[2792]: I1105 00:20:03.986958 2792 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 5 00:20:03.986981 kubelet[2792]: I1105 00:20:03.986984 2792 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 00:20:03.987061 kubelet[2792]: I1105 00:20:03.987041 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 00:20:03.987061 kubelet[2792]: I1105 00:20:03.987053 2792 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 00:20:03.987459 kubelet[2792]: E1105 00:20:03.987145 2792 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 00:20:04.040477 kubelet[2792]: I1105 00:20:04.040424 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 00:20:04.040477 kubelet[2792]: I1105 00:20:04.040448 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 00:20:04.040477 kubelet[2792]: I1105 00:20:04.040471 2792 state_mem.go:36] "Initialized new in-memory state store" Nov 5 00:20:04.040647 kubelet[2792]: I1105 00:20:04.040623 2792 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 00:20:04.040709 kubelet[2792]: I1105 00:20:04.040643 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 00:20:04.040709 kubelet[2792]: I1105 00:20:04.040665 2792 policy_none.go:49] "None policy: Start" Nov 5 00:20:04.040709 kubelet[2792]: I1105 00:20:04.040676 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 00:20:04.040709 kubelet[2792]: I1105 00:20:04.040689 2792 state_mem.go:35] "Initializing new in-memory state store" Nov 5 00:20:04.040808 kubelet[2792]: I1105 00:20:04.040787 2792 state_mem.go:75] "Updated machine memory state" Nov 5 00:20:04.046803 kubelet[2792]: E1105 00:20:04.046609 2792 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 00:20:04.046803 kubelet[2792]: I1105 00:20:04.046801 
2792 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 00:20:04.046896 kubelet[2792]: I1105 00:20:04.046816 2792 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 00:20:04.047114 kubelet[2792]: I1105 00:20:04.047067 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 00:20:04.051917 kubelet[2792]: E1105 00:20:04.051890 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 00:20:04.089970 kubelet[2792]: I1105 00:20:04.088886 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:04.089970 kubelet[2792]: I1105 00:20:04.089703 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.089970 kubelet[2792]: I1105 00:20:04.089877 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:04.149874 kubelet[2792]: I1105 00:20:04.149838 2792 kubelet_node_status.go:75] "Attempting to register node" node="172-234-219-54" Nov 5 00:20:04.151983 kubelet[2792]: I1105 00:20:04.151652 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-kubeconfig\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.151983 kubelet[2792]: I1105 00:20:04.151682 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-219-54\" (UID: 
\"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.151983 kubelet[2792]: I1105 00:20:04.151701 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/451f69159bc6d42d2e2f196a3322e8ae-kubeconfig\") pod \"kube-scheduler-172-234-219-54\" (UID: \"451f69159bc6d42d2e2f196a3322e8ae\") " pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:04.151983 kubelet[2792]: I1105 00:20:04.151718 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-k8s-certs\") pod \"kube-apiserver-172-234-219-54\" (UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:04.151983 kubelet[2792]: I1105 00:20:04.151731 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-ca-certs\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.152885 kubelet[2792]: I1105 00:20:04.151746 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-flexvolume-dir\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.152885 kubelet[2792]: I1105 00:20:04.151760 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-ca-certs\") pod \"kube-apiserver-172-234-219-54\" 
(UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:04.152885 kubelet[2792]: I1105 00:20:04.151773 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea39544c3c8c8de584720b9302ccfaa7-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-219-54\" (UID: \"ea39544c3c8c8de584720b9302ccfaa7\") " pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:04.152885 kubelet[2792]: I1105 00:20:04.151787 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26d3459e023f50debfba804673cf668c-k8s-certs\") pod \"kube-controller-manager-172-234-219-54\" (UID: \"26d3459e023f50debfba804673cf668c\") " pod="kube-system/kube-controller-manager-172-234-219-54" Nov 5 00:20:04.159540 kubelet[2792]: I1105 00:20:04.159490 2792 kubelet_node_status.go:124] "Node was previously registered" node="172-234-219-54" Nov 5 00:20:04.159662 kubelet[2792]: I1105 00:20:04.159632 2792 kubelet_node_status.go:78] "Successfully registered node" node="172-234-219-54" Nov 5 00:20:04.189680 sudo[2828]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 5 00:20:04.190333 sudo[2828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 5 00:20:04.396235 kubelet[2792]: E1105 00:20:04.396108 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:04.398728 kubelet[2792]: E1105 00:20:04.398688 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:04.398918 kubelet[2792]: E1105 
00:20:04.398859 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:04.549874 sudo[2828]: pam_unix(sudo:session): session closed for user root Nov 5 00:20:04.920947 kubelet[2792]: I1105 00:20:04.920786 2792 apiserver.go:52] "Watching apiserver" Nov 5 00:20:04.952448 kubelet[2792]: I1105 00:20:04.952155 2792 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 00:20:05.023007 kubelet[2792]: I1105 00:20:05.022972 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:05.024528 kubelet[2792]: I1105 00:20:05.024144 2792 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:05.025458 kubelet[2792]: E1105 00:20:05.024647 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:05.029309 kubelet[2792]: E1105 00:20:05.029240 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-219-54\" already exists" pod="kube-system/kube-apiserver-172-234-219-54" Nov 5 00:20:05.029542 kubelet[2792]: E1105 00:20:05.029528 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:05.032540 kubelet[2792]: E1105 00:20:05.032441 2792 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-219-54\" already exists" pod="kube-system/kube-scheduler-172-234-219-54" Nov 5 00:20:05.033336 kubelet[2792]: E1105 00:20:05.033321 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:05.061159 kubelet[2792]: I1105 00:20:05.060951 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-219-54" podStartSLOduration=1.060939824 podStartE2EDuration="1.060939824s" podCreationTimestamp="2025-11-05 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:05.05999059 +0000 UTC m=+1.221075900" watchObservedRunningTime="2025-11-05 00:20:05.060939824 +0000 UTC m=+1.222025134" Nov 5 00:20:05.061159 kubelet[2792]: I1105 00:20:05.061036 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-219-54" podStartSLOduration=1.061031741 podStartE2EDuration="1.061031741s" podCreationTimestamp="2025-11-05 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:05.04925499 +0000 UTC m=+1.210340320" watchObservedRunningTime="2025-11-05 00:20:05.061031741 +0000 UTC m=+1.222117051" Nov 5 00:20:05.081121 kubelet[2792]: I1105 00:20:05.081020 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-219-54" podStartSLOduration=1.081004356 podStartE2EDuration="1.081004356s" podCreationTimestamp="2025-11-05 00:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:05.07176542 +0000 UTC m=+1.232850750" watchObservedRunningTime="2025-11-05 00:20:05.081004356 +0000 UTC m=+1.242089666" Nov 5 00:20:05.987796 sudo[1844]: pam_unix(sudo:session): session closed for user root Nov 5 00:20:06.024884 kubelet[2792]: E1105 00:20:06.024814 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:06.024884 kubelet[2792]: E1105 00:20:06.024841 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:06.042785 sshd[1843]: Connection closed by 139.178.68.195 port 33494 Nov 5 00:20:06.043238 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Nov 5 00:20:06.047685 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Nov 5 00:20:06.047896 systemd[1]: sshd@6-172.234.219.54:22-139.178.68.195:33494.service: Deactivated successfully. Nov 5 00:20:06.050379 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 00:20:06.050594 systemd[1]: session-7.scope: Consumed 5.591s CPU time, 272.7M memory peak. Nov 5 00:20:06.053514 systemd-logind[1575]: Removed session 7. Nov 5 00:20:07.027227 kubelet[2792]: E1105 00:20:07.027168 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:08.028200 kubelet[2792]: E1105 00:20:08.028141 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:08.220610 kubelet[2792]: I1105 00:20:08.220575 2792 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 00:20:08.220942 containerd[1603]: time="2025-11-05T00:20:08.220905598Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 00:20:08.221788 kubelet[2792]: I1105 00:20:08.221339 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 00:20:08.300509 kubelet[2792]: E1105 00:20:08.300366 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.012868 systemd[1]: Created slice kubepods-besteffort-pod42371ca9_2b4d_4192_9e75_6cb814ff99a5.slice - libcontainer container kubepods-besteffort-pod42371ca9_2b4d_4192_9e75_6cb814ff99a5.slice. Nov 5 00:20:09.032457 systemd[1]: Created slice kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice - libcontainer container kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice. Nov 5 00:20:09.035999 kubelet[2792]: E1105 00:20:09.035911 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087840 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42371ca9-2b4d-4192-9e75-6cb814ff99a5-kube-proxy\") pod \"kube-proxy-8n27l\" (UID: \"42371ca9-2b4d-4192-9e75-6cb814ff99a5\") " pod="kube-system/kube-proxy-8n27l" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087868 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-config-path\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087888 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-bpf-maps\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087903 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cni-path\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087915 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-etc-cni-netd\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088203 kubelet[2792]: I1105 00:20:09.087930 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdxj\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-kube-api-access-sbdxj\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088412 kubelet[2792]: I1105 00:20:09.087944 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42371ca9-2b4d-4192-9e75-6cb814ff99a5-lib-modules\") pod \"kube-proxy-8n27l\" (UID: \"42371ca9-2b4d-4192-9e75-6cb814ff99a5\") " pod="kube-system/kube-proxy-8n27l" Nov 5 00:20:09.088412 kubelet[2792]: I1105 00:20:09.088164 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-hostproc\") pod \"cilium-7b2mw\" (UID: 
\"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.088931 kubelet[2792]: I1105 00:20:09.088913 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-lib-modules\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.089076 kubelet[2792]: I1105 00:20:09.089062 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-xtables-lock\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.089717 kubelet[2792]: I1105 00:20:09.089659 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fc3a528-f94c-454d-b240-d94a845ce41f-clustermesh-secrets\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.089903 kubelet[2792]: I1105 00:20:09.089889 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-kernel\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.090257 kubelet[2792]: I1105 00:20:09.090242 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42371ca9-2b4d-4192-9e75-6cb814ff99a5-xtables-lock\") pod \"kube-proxy-8n27l\" (UID: \"42371ca9-2b4d-4192-9e75-6cb814ff99a5\") " pod="kube-system/kube-proxy-8n27l" Nov 5 00:20:09.090395 kubelet[2792]: 
I1105 00:20:09.090381 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zfl2\" (UniqueName: \"kubernetes.io/projected/42371ca9-2b4d-4192-9e75-6cb814ff99a5-kube-api-access-9zfl2\") pod \"kube-proxy-8n27l\" (UID: \"42371ca9-2b4d-4192-9e75-6cb814ff99a5\") " pod="kube-system/kube-proxy-8n27l" Nov 5 00:20:09.090535 kubelet[2792]: I1105 00:20:09.090514 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-run\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.090895 kubelet[2792]: I1105 00:20:09.090769 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-cgroup\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.091335 kubelet[2792]: I1105 00:20:09.091307 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-net\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.091563 kubelet[2792]: I1105 00:20:09.091431 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-hubble-tls\") pod \"cilium-7b2mw\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") " pod="kube-system/cilium-7b2mw" Nov 5 00:20:09.100648 kubelet[2792]: I1105 00:20:09.100512 2792 status_manager.go:895] "Failed to get status for pod" 
podUID="73d6cb90-6df3-4a48-ba22-e889a2919a11" pod="kube-system/cilium-operator-6c4d7847fc-xjwnf" err="pods \"cilium-operator-6c4d7847fc-xjwnf\" is forbidden: User \"system:node:172-234-219-54\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-219-54' and this object" Nov 5 00:20:09.107822 systemd[1]: Created slice kubepods-besteffort-pod73d6cb90_6df3_4a48_ba22_e889a2919a11.slice - libcontainer container kubepods-besteffort-pod73d6cb90_6df3_4a48_ba22_e889a2919a11.slice. Nov 5 00:20:09.192544 kubelet[2792]: I1105 00:20:09.192499 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73d6cb90-6df3-4a48-ba22-e889a2919a11-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xjwnf\" (UID: \"73d6cb90-6df3-4a48-ba22-e889a2919a11\") " pod="kube-system/cilium-operator-6c4d7847fc-xjwnf" Nov 5 00:20:09.192666 kubelet[2792]: I1105 00:20:09.192636 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knk8f\" (UniqueName: \"kubernetes.io/projected/73d6cb90-6df3-4a48-ba22-e889a2919a11-kube-api-access-knk8f\") pod \"cilium-operator-6c4d7847fc-xjwnf\" (UID: \"73d6cb90-6df3-4a48-ba22-e889a2919a11\") " pod="kube-system/cilium-operator-6c4d7847fc-xjwnf" Nov 5 00:20:09.321726 kubelet[2792]: E1105 00:20:09.321639 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.323332 containerd[1603]: time="2025-11-05T00:20:09.323272073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8n27l,Uid:42371ca9-2b4d-4192-9e75-6cb814ff99a5,Namespace:kube-system,Attempt:0,}" Nov 5 00:20:09.340125 containerd[1603]: time="2025-11-05T00:20:09.340073101Z" level=info msg="connecting to shim 
70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906" address="unix:///run/containerd/s/b7affda6cc136569927c92c744d7218c1324cedd83b169ea0b030980e69666a1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:20:09.341279 kubelet[2792]: E1105 00:20:09.341228 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.343244 containerd[1603]: time="2025-11-05T00:20:09.342990508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7b2mw,Uid:8fc3a528-f94c-454d-b240-d94a845ce41f,Namespace:kube-system,Attempt:0,}" Nov 5 00:20:09.364954 containerd[1603]: time="2025-11-05T00:20:09.364926879Z" level=info msg="connecting to shim 5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:20:09.376623 systemd[1]: Started cri-containerd-70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906.scope - libcontainer container 70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906. Nov 5 00:20:09.411902 kubelet[2792]: E1105 00:20:09.411704 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.415240 containerd[1603]: time="2025-11-05T00:20:09.415194617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xjwnf,Uid:73d6cb90-6df3-4a48-ba22-e889a2919a11,Namespace:kube-system,Attempt:0,}" Nov 5 00:20:09.416530 systemd[1]: Started cri-containerd-5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a.scope - libcontainer container 5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a. 
Nov 5 00:20:09.433474 containerd[1603]: time="2025-11-05T00:20:09.433437044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8n27l,Uid:42371ca9-2b4d-4192-9e75-6cb814ff99a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906\"" Nov 5 00:20:09.434727 kubelet[2792]: E1105 00:20:09.434707 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.441898 containerd[1603]: time="2025-11-05T00:20:09.441580040Z" level=info msg="connecting to shim b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4" address="unix:///run/containerd/s/4402567a6066d21b8faf3001c6ff0901c9f06ee1007066366860554118770f1b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:20:09.442990 containerd[1603]: time="2025-11-05T00:20:09.442420606Z" level=info msg="CreateContainer within sandbox \"70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 00:20:09.461914 containerd[1603]: time="2025-11-05T00:20:09.461890917Z" level=info msg="Container 49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:09.473171 containerd[1603]: time="2025-11-05T00:20:09.473130265Z" level=info msg="CreateContainer within sandbox \"70fcafc58ce5949eff9525a7c5bc441f1238ffe640d23926f0f6c9b37131b906\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845\"" Nov 5 00:20:09.474672 containerd[1603]: time="2025-11-05T00:20:09.474468476Z" level=info msg="StartContainer for \"49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845\"" Nov 5 00:20:09.475470 systemd[1]: Started 
cri-containerd-b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4.scope - libcontainer container b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4. Nov 5 00:20:09.476146 containerd[1603]: time="2025-11-05T00:20:09.475588315Z" level=info msg="connecting to shim 49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845" address="unix:///run/containerd/s/b7affda6cc136569927c92c744d7218c1324cedd83b169ea0b030980e69666a1" protocol=ttrpc version=3 Nov 5 00:20:09.486986 containerd[1603]: time="2025-11-05T00:20:09.486657897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7b2mw,Uid:8fc3a528-f94c-454d-b240-d94a845ce41f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\"" Nov 5 00:20:09.488572 kubelet[2792]: E1105 00:20:09.487825 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:09.490252 containerd[1603]: time="2025-11-05T00:20:09.489781427Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 00:20:09.505718 systemd[1]: Started cri-containerd-49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845.scope - libcontainer container 49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845. 
Nov 5 00:20:09.579562 containerd[1603]: time="2025-11-05T00:20:09.579534453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xjwnf,Uid:73d6cb90-6df3-4a48-ba22-e889a2919a11,Namespace:kube-system,Attempt:0,} returns sandbox id \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\"" Nov 5 00:20:09.581393 containerd[1603]: time="2025-11-05T00:20:09.581334502Z" level=info msg="StartContainer for \"49d93c9c9a28b8ffba73635b5bb4a8476327e4c335151610bbd9def850550845\" returns successfully" Nov 5 00:20:09.583332 kubelet[2792]: E1105 00:20:09.583298 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:10.046407 kubelet[2792]: E1105 00:20:10.046331 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:10.057534 kubelet[2792]: I1105 00:20:10.057468 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8n27l" podStartSLOduration=2.057456569 podStartE2EDuration="2.057456569s" podCreationTimestamp="2025-11-05 00:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:10.056251921 +0000 UTC m=+6.217337231" watchObservedRunningTime="2025-11-05 00:20:10.057456569 +0000 UTC m=+6.218541879" Nov 5 00:20:11.768553 kubelet[2792]: E1105 00:20:11.768527 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:12.053865 kubelet[2792]: E1105 00:20:12.053615 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:13.055927 kubelet[2792]: E1105 00:20:13.055877 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:13.809351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970715277.mount: Deactivated successfully. Nov 5 00:20:15.700446 containerd[1603]: time="2025-11-05T00:20:15.700395182Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:20:15.701551 containerd[1603]: time="2025-11-05T00:20:15.701520271Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 5 00:20:15.702220 containerd[1603]: time="2025-11-05T00:20:15.701849085Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:20:15.703392 containerd[1603]: time="2025-11-05T00:20:15.703292097Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.21348318s" Nov 5 00:20:15.703392 containerd[1603]: time="2025-11-05T00:20:15.703320156Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 5 00:20:15.705503 containerd[1603]: time="2025-11-05T00:20:15.705412406Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 00:20:15.707978 containerd[1603]: time="2025-11-05T00:20:15.707948977Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 00:20:15.715206 containerd[1603]: time="2025-11-05T00:20:15.715093189Z" level=info msg="Container 1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:15.728507 containerd[1603]: time="2025-11-05T00:20:15.728436001Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\"" Nov 5 00:20:15.729257 containerd[1603]: time="2025-11-05T00:20:15.729228046Z" level=info msg="StartContainer for \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\"" Nov 5 00:20:15.729889 containerd[1603]: time="2025-11-05T00:20:15.729861804Z" level=info msg="connecting to shim 1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" protocol=ttrpc version=3 Nov 5 00:20:15.760326 systemd[1]: Started cri-containerd-1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da.scope - libcontainer container 1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da. 
Nov 5 00:20:15.803014 containerd[1603]: time="2025-11-05T00:20:15.802423252Z" level=info msg="StartContainer for \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" returns successfully" Nov 5 00:20:15.818608 systemd[1]: cri-containerd-1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da.scope: Deactivated successfully. Nov 5 00:20:15.822391 containerd[1603]: time="2025-11-05T00:20:15.822337848Z" level=info msg="received exit event container_id:\"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" id:\"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" pid:3210 exited_at:{seconds:1762302015 nanos:821753979}" Nov 5 00:20:15.822588 containerd[1603]: time="2025-11-05T00:20:15.822562763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" id:\"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" pid:3210 exited_at:{seconds:1762302015 nanos:821753979}" Nov 5 00:20:15.845115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da-rootfs.mount: Deactivated successfully. 
Nov 5 00:20:16.066769 kubelet[2792]: E1105 00:20:16.066652 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:16.074925 containerd[1603]: time="2025-11-05T00:20:16.074687735Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 00:20:16.082657 containerd[1603]: time="2025-11-05T00:20:16.082611652Z" level=info msg="Container 261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:16.087614 containerd[1603]: time="2025-11-05T00:20:16.087568203Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\"" Nov 5 00:20:16.088014 containerd[1603]: time="2025-11-05T00:20:16.087987455Z" level=info msg="StartContainer for \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\"" Nov 5 00:20:16.088746 containerd[1603]: time="2025-11-05T00:20:16.088703382Z" level=info msg="connecting to shim 261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" protocol=ttrpc version=3 Nov 5 00:20:16.113317 systemd[1]: Started cri-containerd-261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726.scope - libcontainer container 261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726. 
Nov 5 00:20:16.154967 containerd[1603]: time="2025-11-05T00:20:16.154915745Z" level=info msg="StartContainer for \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" returns successfully" Nov 5 00:20:16.169457 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 00:20:16.170499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:20:16.170918 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:20:16.173413 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 00:20:16.175382 systemd[1]: cri-containerd-261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726.scope: Deactivated successfully. Nov 5 00:20:16.176472 containerd[1603]: time="2025-11-05T00:20:16.176362778Z" level=info msg="received exit event container_id:\"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" id:\"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" pid:3252 exited_at:{seconds:1762302016 nanos:175603792}" Nov 5 00:20:16.176857 containerd[1603]: time="2025-11-05T00:20:16.176786460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" id:\"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" pid:3252 exited_at:{seconds:1762302016 nanos:175603792}" Nov 5 00:20:16.197787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 00:20:16.704289 kubelet[2792]: E1105 00:20:16.704231 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:16.963415 update_engine[1576]: I20251105 00:20:16.963145 1576 update_attempter.cc:509] Updating boot flags... 
Nov 5 00:20:17.080990 kubelet[2792]: E1105 00:20:17.079877 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:17.080990 kubelet[2792]: E1105 00:20:17.080574 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:17.091131 containerd[1603]: time="2025-11-05T00:20:17.089627164Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 00:20:17.146015 containerd[1603]: time="2025-11-05T00:20:17.145930352Z" level=info msg="Container 92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:17.146435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093426786.mount: Deactivated successfully. 
Nov 5 00:20:17.173811 containerd[1603]: time="2025-11-05T00:20:17.173542735Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\"" Nov 5 00:20:17.178855 containerd[1603]: time="2025-11-05T00:20:17.178828545Z" level=info msg="StartContainer for \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\"" Nov 5 00:20:17.183584 containerd[1603]: time="2025-11-05T00:20:17.183550645Z" level=info msg="connecting to shim 92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" protocol=ttrpc version=3 Nov 5 00:20:17.215349 systemd[1]: Started cri-containerd-92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582.scope - libcontainer container 92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582. Nov 5 00:20:17.442494 systemd[1]: cri-containerd-92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582.scope: Deactivated successfully. 
Nov 5 00:20:17.452278 containerd[1603]: time="2025-11-05T00:20:17.446305532Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice/cri-containerd-92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582.scope/memory.events\": no such file or directory" Nov 5 00:20:17.454033 containerd[1603]: time="2025-11-05T00:20:17.452958689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" id:\"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" pid:3327 exited_at:{seconds:1762302017 nanos:442675763}" Nov 5 00:20:17.461097 containerd[1603]: time="2025-11-05T00:20:17.460621280Z" level=info msg="received exit event container_id:\"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" id:\"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" pid:3327 exited_at:{seconds:1762302017 nanos:442675763}" Nov 5 00:20:17.467823 containerd[1603]: time="2025-11-05T00:20:17.467745289Z" level=info msg="StartContainer for \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" returns successfully" Nov 5 00:20:17.543009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582-rootfs.mount: Deactivated successfully. 
Nov 5 00:20:17.936331 containerd[1603]: time="2025-11-05T00:20:17.936286514Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:20:17.937049 containerd[1603]: time="2025-11-05T00:20:17.936982343Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 5 00:20:17.938056 containerd[1603]: time="2025-11-05T00:20:17.937571263Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 00:20:17.939332 containerd[1603]: time="2025-11-05T00:20:17.939295654Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.233853708s" Nov 5 00:20:17.939387 containerd[1603]: time="2025-11-05T00:20:17.939331943Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 5 00:20:17.944339 containerd[1603]: time="2025-11-05T00:20:17.944311888Z" level=info msg="CreateContainer within sandbox \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 00:20:17.951586 containerd[1603]: time="2025-11-05T00:20:17.951565316Z" level=info msg="Container 
fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:17.961582 containerd[1603]: time="2025-11-05T00:20:17.961543037Z" level=info msg="CreateContainer within sandbox \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\"" Nov 5 00:20:17.962019 containerd[1603]: time="2025-11-05T00:20:17.961986590Z" level=info msg="StartContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\"" Nov 5 00:20:17.962880 containerd[1603]: time="2025-11-05T00:20:17.962847966Z" level=info msg="connecting to shim fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8" address="unix:///run/containerd/s/4402567a6066d21b8faf3001c6ff0901c9f06ee1007066366860554118770f1b" protocol=ttrpc version=3 Nov 5 00:20:17.991319 systemd[1]: Started cri-containerd-fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8.scope - libcontainer container fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8. 
Nov 5 00:20:18.034663 containerd[1603]: time="2025-11-05T00:20:18.034601359Z" level=info msg="StartContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" returns successfully" Nov 5 00:20:18.085515 kubelet[2792]: E1105 00:20:18.085486 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:18.090305 kubelet[2792]: E1105 00:20:18.089847 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:18.094706 containerd[1603]: time="2025-11-05T00:20:18.094677978Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 00:20:18.101170 kubelet[2792]: I1105 00:20:18.101021 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xjwnf" podStartSLOduration=0.745029343 podStartE2EDuration="9.101010048s" podCreationTimestamp="2025-11-05 00:20:09 +0000 UTC" firstStartedPulling="2025-11-05 00:20:09.584618057 +0000 UTC m=+5.745703367" lastFinishedPulling="2025-11-05 00:20:17.940598762 +0000 UTC m=+14.101684072" observedRunningTime="2025-11-05 00:20:18.099437812 +0000 UTC m=+14.260523122" watchObservedRunningTime="2025-11-05 00:20:18.101010048 +0000 UTC m=+14.262095378" Nov 5 00:20:18.106248 containerd[1603]: time="2025-11-05T00:20:18.105992498Z" level=info msg="Container 5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:18.115583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234705909.mount: Deactivated successfully. 
Nov 5 00:20:18.119201 containerd[1603]: time="2025-11-05T00:20:18.118718668Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\"" Nov 5 00:20:18.120491 containerd[1603]: time="2025-11-05T00:20:18.120472839Z" level=info msg="StartContainer for \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\"" Nov 5 00:20:18.122781 containerd[1603]: time="2025-11-05T00:20:18.122758864Z" level=info msg="connecting to shim 5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" protocol=ttrpc version=3 Nov 5 00:20:18.168328 systemd[1]: Started cri-containerd-5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699.scope - libcontainer container 5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699. Nov 5 00:20:18.241649 systemd[1]: cri-containerd-5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699.scope: Deactivated successfully. 
Nov 5 00:20:18.245692 containerd[1603]: time="2025-11-05T00:20:18.245641519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" id:\"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" pid:3411 exited_at:{seconds:1762302018 nanos:244962570}" Nov 5 00:20:18.246385 containerd[1603]: time="2025-11-05T00:20:18.246336967Z" level=info msg="received exit event container_id:\"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" id:\"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" pid:3411 exited_at:{seconds:1762302018 nanos:244962570}" Nov 5 00:20:18.261634 containerd[1603]: time="2025-11-05T00:20:18.261600256Z" level=info msg="StartContainer for \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" returns successfully" Nov 5 00:20:19.098855 kubelet[2792]: E1105 00:20:19.097573 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:19.098855 kubelet[2792]: E1105 00:20:19.098054 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:19.104283 containerd[1603]: time="2025-11-05T00:20:19.103457179Z" level=info msg="CreateContainer within sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 00:20:19.120027 containerd[1603]: time="2025-11-05T00:20:19.120002813Z" level=info msg="Container f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:19.132399 containerd[1603]: time="2025-11-05T00:20:19.132342251Z" level=info msg="CreateContainer within sandbox 
\"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\"" Nov 5 00:20:19.133367 containerd[1603]: time="2025-11-05T00:20:19.133278927Z" level=info msg="StartContainer for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\"" Nov 5 00:20:19.135163 containerd[1603]: time="2025-11-05T00:20:19.135102929Z" level=info msg="connecting to shim f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc" address="unix:///run/containerd/s/645050799b8a86a5741c6dc702b6134d322621d7bd1643f5a4d1b0197a8605b2" protocol=ttrpc version=3 Nov 5 00:20:19.166383 systemd[1]: Started cri-containerd-f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc.scope - libcontainer container f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc. Nov 5 00:20:19.221806 containerd[1603]: time="2025-11-05T00:20:19.221756227Z" level=info msg="StartContainer for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" returns successfully" Nov 5 00:20:19.313916 containerd[1603]: time="2025-11-05T00:20:19.313778605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" id:\"5490c49488003c6fb11a0bb74880772c89bf1e0507c564421aaf40c84bd937e8\" pid:3480 exited_at:{seconds:1762302019 nanos:313331432}" Nov 5 00:20:19.352414 kubelet[2792]: I1105 00:20:19.351925 2792 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 00:20:19.389695 kubelet[2792]: I1105 00:20:19.389514 2792 status_manager.go:895] "Failed to get status for pod" podUID="9298a89a-bc0e-4784-8f3f-847e0d1790ea" pod="kube-system/coredns-674b8bbfcf-xmrwm" err="pods \"coredns-674b8bbfcf-xmrwm\" is forbidden: User \"system:node:172-234-219-54\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no 
relationship found between node '172-234-219-54' and this object" Nov 5 00:20:19.395328 systemd[1]: Created slice kubepods-burstable-pod9298a89a_bc0e_4784_8f3f_847e0d1790ea.slice - libcontainer container kubepods-burstable-pod9298a89a_bc0e_4784_8f3f_847e0d1790ea.slice. Nov 5 00:20:19.407719 systemd[1]: Created slice kubepods-burstable-podfcd3789c_24e8_4cc0_8401_3e95c09682df.slice - libcontainer container kubepods-burstable-podfcd3789c_24e8_4cc0_8401_3e95c09682df.slice. Nov 5 00:20:19.477676 kubelet[2792]: I1105 00:20:19.477641 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhdjb\" (UniqueName: \"kubernetes.io/projected/9298a89a-bc0e-4784-8f3f-847e0d1790ea-kube-api-access-qhdjb\") pod \"coredns-674b8bbfcf-xmrwm\" (UID: \"9298a89a-bc0e-4784-8f3f-847e0d1790ea\") " pod="kube-system/coredns-674b8bbfcf-xmrwm" Nov 5 00:20:19.477972 kubelet[2792]: I1105 00:20:19.477822 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcd3789c-24e8-4cc0-8401-3e95c09682df-config-volume\") pod \"coredns-674b8bbfcf-lv9rg\" (UID: \"fcd3789c-24e8-4cc0-8401-3e95c09682df\") " pod="kube-system/coredns-674b8bbfcf-lv9rg" Nov 5 00:20:19.477972 kubelet[2792]: I1105 00:20:19.477844 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25h9j\" (UniqueName: \"kubernetes.io/projected/fcd3789c-24e8-4cc0-8401-3e95c09682df-kube-api-access-25h9j\") pod \"coredns-674b8bbfcf-lv9rg\" (UID: \"fcd3789c-24e8-4cc0-8401-3e95c09682df\") " pod="kube-system/coredns-674b8bbfcf-lv9rg" Nov 5 00:20:19.477972 kubelet[2792]: I1105 00:20:19.477864 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9298a89a-bc0e-4784-8f3f-847e0d1790ea-config-volume\") pod \"coredns-674b8bbfcf-xmrwm\" 
(UID: \"9298a89a-bc0e-4784-8f3f-847e0d1790ea\") " pod="kube-system/coredns-674b8bbfcf-xmrwm" Nov 5 00:20:19.701888 kubelet[2792]: E1105 00:20:19.701750 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:19.702836 containerd[1603]: time="2025-11-05T00:20:19.702789275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xmrwm,Uid:9298a89a-bc0e-4784-8f3f-847e0d1790ea,Namespace:kube-system,Attempt:0,}" Nov 5 00:20:19.715410 kubelet[2792]: E1105 00:20:19.715105 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:19.717123 containerd[1603]: time="2025-11-05T00:20:19.716854367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lv9rg,Uid:fcd3789c-24e8-4cc0-8401-3e95c09682df,Namespace:kube-system,Attempt:0,}" Nov 5 00:20:20.105463 kubelet[2792]: E1105 00:20:20.105421 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:20.125277 kubelet[2792]: I1105 00:20:20.125165 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7b2mw" podStartSLOduration=5.909885373 podStartE2EDuration="12.125153311s" podCreationTimestamp="2025-11-05 00:20:08 +0000 UTC" firstStartedPulling="2025-11-05 00:20:09.489070428 +0000 UTC m=+5.650155738" lastFinishedPulling="2025-11-05 00:20:15.704338366 +0000 UTC m=+11.865423676" observedRunningTime="2025-11-05 00:20:20.124106266 +0000 UTC m=+16.285191576" watchObservedRunningTime="2025-11-05 00:20:20.125153311 +0000 UTC m=+16.286238621" Nov 5 00:20:21.108860 kubelet[2792]: E1105 00:20:21.108387 2792 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:21.667332 systemd-networkd[1503]: cilium_host: Link UP Nov 5 00:20:21.668321 systemd-networkd[1503]: cilium_net: Link UP Nov 5 00:20:21.668522 systemd-networkd[1503]: cilium_host: Gained carrier Nov 5 00:20:21.668704 systemd-networkd[1503]: cilium_net: Gained carrier Nov 5 00:20:21.795977 systemd-networkd[1503]: cilium_vxlan: Link UP Nov 5 00:20:21.795986 systemd-networkd[1503]: cilium_vxlan: Gained carrier Nov 5 00:20:22.046221 kernel: NET: Registered PF_ALG protocol family Nov 5 00:20:22.109994 kubelet[2792]: E1105 00:20:22.109957 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:22.188476 systemd-networkd[1503]: cilium_host: Gained IPv6LL Nov 5 00:20:22.442319 systemd-networkd[1503]: cilium_net: Gained IPv6LL Nov 5 00:20:22.771870 systemd-networkd[1503]: lxc_health: Link UP Nov 5 00:20:22.774325 systemd-networkd[1503]: lxc_health: Gained carrier Nov 5 00:20:23.249698 systemd-networkd[1503]: lxcb5ef05f47e70: Link UP Nov 5 00:20:23.258308 kernel: eth0: renamed from tmp18a6c Nov 5 00:20:23.270168 systemd-networkd[1503]: lxcb5ef05f47e70: Gained carrier Nov 5 00:20:23.272451 systemd-networkd[1503]: lxc7d7bd2a214da: Link UP Nov 5 00:20:23.279499 kernel: eth0: renamed from tmp56300 Nov 5 00:20:23.285222 systemd-networkd[1503]: lxc7d7bd2a214da: Gained carrier Nov 5 00:20:23.343537 kubelet[2792]: E1105 00:20:23.343507 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:23.596355 systemd-networkd[1503]: cilium_vxlan: Gained IPv6LL Nov 5 00:20:23.787290 systemd-networkd[1503]: lxc_health: 
Gained IPv6LL Nov 5 00:20:24.874370 systemd-networkd[1503]: lxc7d7bd2a214da: Gained IPv6LL Nov 5 00:20:25.066473 systemd-networkd[1503]: lxcb5ef05f47e70: Gained IPv6LL Nov 5 00:20:26.670454 containerd[1603]: time="2025-11-05T00:20:26.670373039Z" level=info msg="connecting to shim 56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc" address="unix:///run/containerd/s/7a5ccea35685137234b2a9f88604722024f01af9bbf57157bbc0f816a18c3420" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:20:26.735381 systemd[1]: Started cri-containerd-56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc.scope - libcontainer container 56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc. Nov 5 00:20:26.769682 containerd[1603]: time="2025-11-05T00:20:26.769586872Z" level=info msg="connecting to shim 18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce" address="unix:///run/containerd/s/725e26f80b13a01c927108d0ddb56783ab26c87f692943d846ff689d126e2a7e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:20:26.822596 systemd[1]: Started cri-containerd-18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce.scope - libcontainer container 18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce. 
Nov 5 00:20:26.894779 containerd[1603]: time="2025-11-05T00:20:26.894725006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lv9rg,Uid:fcd3789c-24e8-4cc0-8401-3e95c09682df,Namespace:kube-system,Attempt:0,} returns sandbox id \"56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc\"" Nov 5 00:20:26.897560 kubelet[2792]: E1105 00:20:26.897504 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:26.903935 containerd[1603]: time="2025-11-05T00:20:26.903890361Z" level=info msg="CreateContainer within sandbox \"56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:20:26.934795 containerd[1603]: time="2025-11-05T00:20:26.931794093Z" level=info msg="Container 07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:26.942757 containerd[1603]: time="2025-11-05T00:20:26.942729562Z" level=info msg="CreateContainer within sandbox \"56300589b3fa29437386be984a433a9a1bbaca7456eeba2e71db0087591693fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870\"" Nov 5 00:20:26.943550 containerd[1603]: time="2025-11-05T00:20:26.943527245Z" level=info msg="StartContainer for \"07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870\"" Nov 5 00:20:26.945266 containerd[1603]: time="2025-11-05T00:20:26.945245679Z" level=info msg="connecting to shim 07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870" address="unix:///run/containerd/s/7a5ccea35685137234b2a9f88604722024f01af9bbf57157bbc0f816a18c3420" protocol=ttrpc version=3 Nov 5 00:20:26.946972 containerd[1603]: time="2025-11-05T00:20:26.946910004Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-xmrwm,Uid:9298a89a-bc0e-4784-8f3f-847e0d1790ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce\"" Nov 5 00:20:26.948376 kubelet[2792]: E1105 00:20:26.948341 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:26.957104 containerd[1603]: time="2025-11-05T00:20:26.957063510Z" level=info msg="CreateContainer within sandbox \"18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 00:20:26.967145 containerd[1603]: time="2025-11-05T00:20:26.967078847Z" level=info msg="Container 09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:20:26.980477 systemd[1]: Started cri-containerd-07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870.scope - libcontainer container 07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870. 
Nov 5 00:20:26.984510 containerd[1603]: time="2025-11-05T00:20:26.984407426Z" level=info msg="CreateContainer within sandbox \"18a6c175529155269883c8e1b918ab9d3c1c2b661a85fc734e9ce34a4cfb12ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374\"" Nov 5 00:20:26.985681 containerd[1603]: time="2025-11-05T00:20:26.985640175Z" level=info msg="StartContainer for \"09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374\"" Nov 5 00:20:26.988917 containerd[1603]: time="2025-11-05T00:20:26.988874656Z" level=info msg="connecting to shim 09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374" address="unix:///run/containerd/s/725e26f80b13a01c927108d0ddb56783ab26c87f692943d846ff689d126e2a7e" protocol=ttrpc version=3 Nov 5 00:20:27.016338 systemd[1]: Started cri-containerd-09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374.scope - libcontainer container 09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374. 
Nov 5 00:20:27.044297 containerd[1603]: time="2025-11-05T00:20:27.044135561Z" level=info msg="StartContainer for \"07a9eec385c9c0350428cac9818724ad6285d4bf4bdd6462298944b16d0ae870\" returns successfully" Nov 5 00:20:27.069529 containerd[1603]: time="2025-11-05T00:20:27.069473232Z" level=info msg="StartContainer for \"09a24a86b0b511a7620ad18b59a403fa536b980d8b01ee80b258262e7f964374\" returns successfully" Nov 5 00:20:27.131219 kubelet[2792]: E1105 00:20:27.130940 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:27.134811 kubelet[2792]: E1105 00:20:27.134533 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:27.152712 kubelet[2792]: I1105 00:20:27.152423 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xmrwm" podStartSLOduration=18.152408666 podStartE2EDuration="18.152408666s" podCreationTimestamp="2025-11-05 00:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:27.150321684 +0000 UTC m=+23.311407014" watchObservedRunningTime="2025-11-05 00:20:27.152408666 +0000 UTC m=+23.313493986" Nov 5 00:20:27.177924 kubelet[2792]: I1105 00:20:27.177526 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lv9rg" podStartSLOduration=18.17750449 podStartE2EDuration="18.17750449s" podCreationTimestamp="2025-11-05 00:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:20:27.174857163 +0000 UTC m=+23.335942493" watchObservedRunningTime="2025-11-05 
00:20:27.17750449 +0000 UTC m=+23.338589800" Nov 5 00:20:27.656271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801179735.mount: Deactivated successfully. Nov 5 00:20:28.136877 kubelet[2792]: E1105 00:20:28.136793 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:28.138300 kubelet[2792]: E1105 00:20:28.137413 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:29.139335 kubelet[2792]: E1105 00:20:29.139218 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:29.139335 kubelet[2792]: E1105 00:20:29.139281 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:33.556708 kubelet[2792]: I1105 00:20:33.555728 2792 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 00:20:33.559996 kubelet[2792]: E1105 00:20:33.559481 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:20:34.154275 kubelet[2792]: E1105 00:20:34.153854 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:21.989304 kubelet[2792]: E1105 00:21:21.988591 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:33.988693 kubelet[2792]: E1105 00:21:33.987951 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:34.988406 kubelet[2792]: E1105 00:21:34.988354 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:35.988687 kubelet[2792]: E1105 00:21:35.988372 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:40.988054 kubelet[2792]: E1105 00:21:40.987994 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:47.989120 kubelet[2792]: E1105 00:21:47.988526 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:21:58.989171 kubelet[2792]: E1105 00:21:58.988784 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:03.988542 kubelet[2792]: E1105 00:22:03.988123 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:30.907600 systemd[1]: Started sshd@7-172.234.219.54:22-139.178.68.195:56808.service - OpenSSH per-connection server daemon (139.178.68.195:56808). 
Nov 5 00:22:31.242650 sshd[4122]: Accepted publickey for core from 139.178.68.195 port 56808 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:31.245124 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:31.253942 systemd-logind[1575]: New session 8 of user core. Nov 5 00:22:31.261334 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 00:22:31.578079 sshd[4125]: Connection closed by 139.178.68.195 port 56808 Nov 5 00:22:31.578699 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Nov 5 00:22:31.584156 systemd[1]: sshd@7-172.234.219.54:22-139.178.68.195:56808.service: Deactivated successfully. Nov 5 00:22:31.590455 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 00:22:31.591804 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Nov 5 00:22:31.593595 systemd-logind[1575]: Removed session 8. Nov 5 00:22:36.645032 systemd[1]: Started sshd@8-172.234.219.54:22-139.178.68.195:43592.service - OpenSSH per-connection server daemon (139.178.68.195:43592). Nov 5 00:22:36.999642 sshd[4138]: Accepted publickey for core from 139.178.68.195 port 43592 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:37.001416 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:37.007256 systemd-logind[1575]: New session 9 of user core. Nov 5 00:22:37.022338 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 00:22:37.311059 sshd[4141]: Connection closed by 139.178.68.195 port 43592 Nov 5 00:22:37.311683 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Nov 5 00:22:37.316208 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Nov 5 00:22:37.316612 systemd[1]: sshd@8-172.234.219.54:22-139.178.68.195:43592.service: Deactivated successfully. 
Nov 5 00:22:37.318635 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 00:22:37.320595 systemd-logind[1575]: Removed session 9. Nov 5 00:22:42.386566 systemd[1]: Started sshd@9-172.234.219.54:22-139.178.68.195:43596.service - OpenSSH per-connection server daemon (139.178.68.195:43596). Nov 5 00:22:42.744045 sshd[4156]: Accepted publickey for core from 139.178.68.195 port 43596 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:42.744577 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:42.749241 systemd-logind[1575]: New session 10 of user core. Nov 5 00:22:42.756625 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 00:22:43.064812 sshd[4159]: Connection closed by 139.178.68.195 port 43596 Nov 5 00:22:43.065830 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Nov 5 00:22:43.072053 systemd[1]: sshd@9-172.234.219.54:22-139.178.68.195:43596.service: Deactivated successfully. Nov 5 00:22:43.074926 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 00:22:43.076380 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Nov 5 00:22:43.078633 systemd-logind[1575]: Removed session 10. Nov 5 00:22:43.988218 kubelet[2792]: E1105 00:22:43.988149 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:48.125917 systemd[1]: Started sshd@10-172.234.219.54:22-139.178.68.195:40154.service - OpenSSH per-connection server daemon (139.178.68.195:40154). 
Nov 5 00:22:48.472496 sshd[4171]: Accepted publickey for core from 139.178.68.195 port 40154 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:48.474306 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:48.479887 systemd-logind[1575]: New session 11 of user core. Nov 5 00:22:48.487343 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 00:22:48.778606 sshd[4174]: Connection closed by 139.178.68.195 port 40154 Nov 5 00:22:48.778823 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Nov 5 00:22:48.783344 systemd[1]: sshd@10-172.234.219.54:22-139.178.68.195:40154.service: Deactivated successfully. Nov 5 00:22:48.785630 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 00:22:48.787089 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Nov 5 00:22:48.788696 systemd-logind[1575]: Removed session 11. Nov 5 00:22:49.988885 kubelet[2792]: E1105 00:22:49.988174 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:53.860564 systemd[1]: Started sshd@11-172.234.219.54:22-139.178.68.195:60256.service - OpenSSH per-connection server daemon (139.178.68.195:60256). Nov 5 00:22:54.228764 sshd[4186]: Accepted publickey for core from 139.178.68.195 port 60256 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:54.231066 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:54.238232 systemd-logind[1575]: New session 12 of user core. Nov 5 00:22:54.241327 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 5 00:22:54.561693 sshd[4189]: Connection closed by 139.178.68.195 port 60256 Nov 5 00:22:54.562529 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Nov 5 00:22:54.567714 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Nov 5 00:22:54.568809 systemd[1]: sshd@11-172.234.219.54:22-139.178.68.195:60256.service: Deactivated successfully. Nov 5 00:22:54.571759 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 00:22:54.573434 systemd-logind[1575]: Removed session 12. Nov 5 00:22:54.988393 kubelet[2792]: E1105 00:22:54.988341 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:57.989154 kubelet[2792]: E1105 00:22:57.988239 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:58.987989 kubelet[2792]: E1105 00:22:58.987952 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:22:59.622964 systemd[1]: Started sshd@12-172.234.219.54:22-139.178.68.195:60260.service - OpenSSH per-connection server daemon (139.178.68.195:60260). Nov 5 00:22:59.972076 sshd[4202]: Accepted publickey for core from 139.178.68.195 port 60260 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:22:59.974239 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:22:59.980441 systemd-logind[1575]: New session 13 of user core. Nov 5 00:22:59.990330 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 00:23:00.293438 sshd[4205]: Connection closed by 139.178.68.195 port 60260 Nov 5 00:23:00.294198 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:00.300877 systemd[1]: sshd@12-172.234.219.54:22-139.178.68.195:60260.service: Deactivated successfully. Nov 5 00:23:00.304263 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 00:23:00.305505 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Nov 5 00:23:00.307595 systemd-logind[1575]: Removed session 13. Nov 5 00:23:00.367370 systemd[1]: Started sshd@13-172.234.219.54:22-139.178.68.195:60266.service - OpenSSH per-connection server daemon (139.178.68.195:60266). Nov 5 00:23:00.723591 sshd[4218]: Accepted publickey for core from 139.178.68.195 port 60266 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:00.725527 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:00.736962 systemd-logind[1575]: New session 14 of user core. Nov 5 00:23:00.742829 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 00:23:01.099066 sshd[4221]: Connection closed by 139.178.68.195 port 60266 Nov 5 00:23:01.099828 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:01.104718 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Nov 5 00:23:01.105399 systemd[1]: sshd@13-172.234.219.54:22-139.178.68.195:60266.service: Deactivated successfully. Nov 5 00:23:01.108289 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 00:23:01.110952 systemd-logind[1575]: Removed session 14. Nov 5 00:23:01.167696 systemd[1]: Started sshd@14-172.234.219.54:22-139.178.68.195:60276.service - OpenSSH per-connection server daemon (139.178.68.195:60276). 
Nov 5 00:23:01.531651 sshd[4232]: Accepted publickey for core from 139.178.68.195 port 60276 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:01.533912 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:01.541584 systemd-logind[1575]: New session 15 of user core. Nov 5 00:23:01.553338 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 00:23:01.856876 sshd[4235]: Connection closed by 139.178.68.195 port 60276 Nov 5 00:23:01.857416 sshd-session[4232]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:01.863156 systemd[1]: sshd@14-172.234.219.54:22-139.178.68.195:60276.service: Deactivated successfully. Nov 5 00:23:01.866711 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 00:23:01.868455 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Nov 5 00:23:01.871515 systemd-logind[1575]: Removed session 15. Nov 5 00:23:06.920954 systemd[1]: Started sshd@15-172.234.219.54:22-139.178.68.195:38780.service - OpenSSH per-connection server daemon (139.178.68.195:38780). Nov 5 00:23:06.988849 kubelet[2792]: E1105 00:23:06.988816 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:23:07.254155 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 38780 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:07.255742 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:07.261119 systemd-logind[1575]: New session 16 of user core. Nov 5 00:23:07.268311 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 00:23:07.560404 sshd[4251]: Connection closed by 139.178.68.195 port 38780
Nov 5 00:23:07.561313 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:07.566977 systemd[1]: sshd@15-172.234.219.54:22-139.178.68.195:38780.service: Deactivated successfully.
Nov 5 00:23:07.569880 systemd[1]: session-16.scope: Deactivated successfully.
Nov 5 00:23:07.571081 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit.
Nov 5 00:23:07.574015 systemd-logind[1575]: Removed session 16.
Nov 5 00:23:08.988321 kubelet[2792]: E1105 00:23:08.988222 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:08.988321 kubelet[2792]: E1105 00:23:08.988256 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:12.634573 systemd[1]: Started sshd@16-172.234.219.54:22-139.178.68.195:38786.service - OpenSSH per-connection server daemon (139.178.68.195:38786).
Nov 5 00:23:12.980121 sshd[4265]: Accepted publickey for core from 139.178.68.195 port 38786 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:12.981535 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:12.986051 systemd-logind[1575]: New session 17 of user core.
Nov 5 00:23:12.993334 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 5 00:23:13.288779 sshd[4268]: Connection closed by 139.178.68.195 port 38786
Nov 5 00:23:13.289935 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:13.295879 systemd[1]: sshd@16-172.234.219.54:22-139.178.68.195:38786.service: Deactivated successfully.
Nov 5 00:23:13.298269 systemd[1]: session-17.scope: Deactivated successfully.
Nov 5 00:23:13.300264 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit.
Nov 5 00:23:13.301910 systemd-logind[1575]: Removed session 17.
Nov 5 00:23:13.354776 systemd[1]: Started sshd@17-172.234.219.54:22-139.178.68.195:60550.service - OpenSSH per-connection server daemon (139.178.68.195:60550).
Nov 5 00:23:13.721709 sshd[4280]: Accepted publickey for core from 139.178.68.195 port 60550 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:13.723288 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:13.729254 systemd-logind[1575]: New session 18 of user core.
Nov 5 00:23:13.735348 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 5 00:23:14.052537 sshd[4283]: Connection closed by 139.178.68.195 port 60550
Nov 5 00:23:14.053286 sshd-session[4280]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:14.057433 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit.
Nov 5 00:23:14.057632 systemd[1]: sshd@17-172.234.219.54:22-139.178.68.195:60550.service: Deactivated successfully.
Nov 5 00:23:14.059755 systemd[1]: session-18.scope: Deactivated successfully.
Nov 5 00:23:14.061331 systemd-logind[1575]: Removed session 18.
Nov 5 00:23:14.112518 systemd[1]: Started sshd@18-172.234.219.54:22-139.178.68.195:60552.service - OpenSSH per-connection server daemon (139.178.68.195:60552).
Nov 5 00:23:14.449636 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 60552 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:14.451079 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:14.456416 systemd-logind[1575]: New session 19 of user core.
Nov 5 00:23:14.464325 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 5 00:23:15.309441 sshd[4296]: Connection closed by 139.178.68.195 port 60552
Nov 5 00:23:15.310050 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:15.314235 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit.
Nov 5 00:23:15.314908 systemd[1]: sshd@18-172.234.219.54:22-139.178.68.195:60552.service: Deactivated successfully.
Nov 5 00:23:15.317492 systemd[1]: session-19.scope: Deactivated successfully.
Nov 5 00:23:15.319231 systemd-logind[1575]: Removed session 19.
Nov 5 00:23:15.371696 systemd[1]: Started sshd@19-172.234.219.54:22-139.178.68.195:60562.service - OpenSSH per-connection server daemon (139.178.68.195:60562).
Nov 5 00:23:15.729023 sshd[4313]: Accepted publickey for core from 139.178.68.195 port 60562 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:15.731137 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:15.738119 systemd-logind[1575]: New session 20 of user core.
Nov 5 00:23:15.745629 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 5 00:23:16.147261 sshd[4316]: Connection closed by 139.178.68.195 port 60562
Nov 5 00:23:16.149412 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:16.154326 systemd[1]: sshd@19-172.234.219.54:22-139.178.68.195:60562.service: Deactivated successfully.
Nov 5 00:23:16.156667 systemd[1]: session-20.scope: Deactivated successfully.
Nov 5 00:23:16.158384 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit.
Nov 5 00:23:16.159635 systemd-logind[1575]: Removed session 20.
Nov 5 00:23:16.213640 systemd[1]: Started sshd@20-172.234.219.54:22-139.178.68.195:60572.service - OpenSSH per-connection server daemon (139.178.68.195:60572).
Nov 5 00:23:16.560809 sshd[4326]: Accepted publickey for core from 139.178.68.195 port 60572 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:16.563322 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:16.571468 systemd-logind[1575]: New session 21 of user core.
Nov 5 00:23:16.580464 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 00:23:16.877216 sshd[4329]: Connection closed by 139.178.68.195 port 60572
Nov 5 00:23:16.877863 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:16.883105 systemd[1]: sshd@20-172.234.219.54:22-139.178.68.195:60572.service: Deactivated successfully.
Nov 5 00:23:16.886397 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 00:23:16.889982 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit.
Nov 5 00:23:16.891055 systemd-logind[1575]: Removed session 21.
Nov 5 00:23:21.946406 systemd[1]: Started sshd@21-172.234.219.54:22-139.178.68.195:60578.service - OpenSSH per-connection server daemon (139.178.68.195:60578).
Nov 5 00:23:22.311279 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 60578 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:22.312896 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:22.319758 systemd-logind[1575]: New session 22 of user core.
Nov 5 00:23:22.326337 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 00:23:22.626829 sshd[4343]: Connection closed by 139.178.68.195 port 60578
Nov 5 00:23:22.627612 sshd-session[4340]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:22.637374 systemd[1]: sshd@21-172.234.219.54:22-139.178.68.195:60578.service: Deactivated successfully.
Nov 5 00:23:22.643078 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 00:23:22.643914 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit.
Nov 5 00:23:22.645648 systemd-logind[1575]: Removed session 22.
Nov 5 00:23:27.698449 systemd[1]: Started sshd@22-172.234.219.54:22-139.178.68.195:55090.service - OpenSSH per-connection server daemon (139.178.68.195:55090).
Nov 5 00:23:28.046473 sshd[4357]: Accepted publickey for core from 139.178.68.195 port 55090 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:28.048910 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:28.055219 systemd-logind[1575]: New session 23 of user core.
Nov 5 00:23:28.062350 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 5 00:23:28.361605 sshd[4360]: Connection closed by 139.178.68.195 port 55090
Nov 5 00:23:28.362388 sshd-session[4357]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:28.367877 systemd[1]: sshd@22-172.234.219.54:22-139.178.68.195:55090.service: Deactivated successfully.
Nov 5 00:23:28.369905 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 00:23:28.371534 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit.
Nov 5 00:23:28.373064 systemd-logind[1575]: Removed session 23.
Nov 5 00:23:33.429338 systemd[1]: Started sshd@23-172.234.219.54:22-139.178.68.195:49164.service - OpenSSH per-connection server daemon (139.178.68.195:49164).
Nov 5 00:23:33.794886 sshd[4373]: Accepted publickey for core from 139.178.68.195 port 49164 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:33.796954 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:33.805248 systemd-logind[1575]: New session 24 of user core.
Nov 5 00:23:33.811339 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 5 00:23:34.101675 sshd[4376]: Connection closed by 139.178.68.195 port 49164
Nov 5 00:23:34.103354 sshd-session[4373]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:34.107495 systemd[1]: sshd@23-172.234.219.54:22-139.178.68.195:49164.service: Deactivated successfully.
Nov 5 00:23:34.109376 systemd[1]: session-24.scope: Deactivated successfully.
Nov 5 00:23:34.110692 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit.
Nov 5 00:23:34.112527 systemd-logind[1575]: Removed session 24.
Nov 5 00:23:34.164429 systemd[1]: Started sshd@24-172.234.219.54:22-139.178.68.195:49174.service - OpenSSH per-connection server daemon (139.178.68.195:49174).
Nov 5 00:23:34.514324 sshd[4388]: Accepted publickey for core from 139.178.68.195 port 49174 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA
Nov 5 00:23:34.516845 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 00:23:34.524275 systemd-logind[1575]: New session 25 of user core.
Nov 5 00:23:34.529354 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 00:23:36.040146 containerd[1603]: time="2025-11-05T00:23:36.039978057Z" level=info msg="StopContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" with timeout 30 (s)"
Nov 5 00:23:36.041141 containerd[1603]: time="2025-11-05T00:23:36.041051169Z" level=info msg="Stop container \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" with signal terminated"
Nov 5 00:23:36.063255 systemd[1]: cri-containerd-fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8.scope: Deactivated successfully.
Nov 5 00:23:36.069894 containerd[1603]: time="2025-11-05T00:23:36.069839496Z" level=info msg="received exit event container_id:\"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" id:\"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" pid:3379 exited_at:{seconds:1762302216 nanos:69165532}"
Nov 5 00:23:36.069974 containerd[1603]: time="2025-11-05T00:23:36.069943785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" id:\"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" pid:3379 exited_at:{seconds:1762302216 nanos:69165532}"
Nov 5 00:23:36.073255 containerd[1603]: time="2025-11-05T00:23:36.072906371Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 5 00:23:36.080676 containerd[1603]: time="2025-11-05T00:23:36.080607996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" id:\"ce9b6b2ae2d2d76707e46e46c616ef52e70699f0be78cd09324598f914588512\" pid:4416 exited_at:{seconds:1762302216 nanos:80289249}"
Nov 5 00:23:36.085997 containerd[1603]: time="2025-11-05T00:23:36.085963450Z" level=info msg="StopContainer for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" with timeout 2 (s)"
Nov 5 00:23:36.086359 containerd[1603]: time="2025-11-05T00:23:36.086332868Z" level=info msg="Stop container \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" with signal terminated"
Nov 5 00:23:36.099897 systemd-networkd[1503]: lxc_health: Link DOWN
Nov 5 00:23:36.099923 systemd-networkd[1503]: lxc_health: Lost carrier
Nov 5 00:23:36.100872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8-rootfs.mount: Deactivated successfully.
Nov 5 00:23:36.127474 containerd[1603]: time="2025-11-05T00:23:36.127435431Z" level=info msg="StopContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" returns successfully"
Nov 5 00:23:36.129008 containerd[1603]: time="2025-11-05T00:23:36.128781290Z" level=info msg="StopPodSandbox for \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\""
Nov 5 00:23:36.129008 containerd[1603]: time="2025-11-05T00:23:36.128835600Z" level=info msg="Container to stop \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.130324 systemd[1]: cri-containerd-f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc.scope: Deactivated successfully.
Nov 5 00:23:36.130658 systemd[1]: cri-containerd-f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc.scope: Consumed 7.135s CPU time, 124.4M memory peak, 136K read from disk, 13.3M written to disk.
Nov 5 00:23:36.134976 containerd[1603]: time="2025-11-05T00:23:36.134689471Z" level=info msg="received exit event container_id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" pid:3449 exited_at:{seconds:1762302216 nanos:133580870}"
Nov 5 00:23:36.135569 containerd[1603]: time="2025-11-05T00:23:36.135497634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" id:\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" pid:3449 exited_at:{seconds:1762302216 nanos:133580870}"
Nov 5 00:23:36.140619 systemd[1]: cri-containerd-b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4.scope: Deactivated successfully.
Nov 5 00:23:36.143473 containerd[1603]: time="2025-11-05T00:23:36.143390727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" id:\"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" pid:2997 exit_status:137 exited_at:{seconds:1762302216 nanos:142959671}"
Nov 5 00:23:36.165056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc-rootfs.mount: Deactivated successfully.
Nov 5 00:23:36.182218 containerd[1603]: time="2025-11-05T00:23:36.182143082Z" level=info msg="StopContainer for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" returns successfully"
Nov 5 00:23:36.183515 containerd[1603]: time="2025-11-05T00:23:36.183488490Z" level=info msg="StopPodSandbox for \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\""
Nov 5 00:23:36.183791 containerd[1603]: time="2025-11-05T00:23:36.183749017Z" level=info msg="Container to stop \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.183791 containerd[1603]: time="2025-11-05T00:23:36.183772377Z" level=info msg="Container to stop \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.184009 containerd[1603]: time="2025-11-05T00:23:36.183985915Z" level=info msg="Container to stop \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.184071 containerd[1603]: time="2025-11-05T00:23:36.184008775Z" level=info msg="Container to stop \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.184071 containerd[1603]: time="2025-11-05T00:23:36.184021125Z" level=info msg="Container to stop \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 5 00:23:36.195615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4-rootfs.mount: Deactivated successfully.
Nov 5 00:23:36.199936 systemd[1]: cri-containerd-5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a.scope: Deactivated successfully.
Nov 5 00:23:36.202916 containerd[1603]: time="2025-11-05T00:23:36.202867947Z" level=info msg="received exit event sandbox_id:\"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" exit_status:137 exited_at:{seconds:1762302216 nanos:142959671}"
Nov 5 00:23:36.204219 containerd[1603]: time="2025-11-05T00:23:36.204121056Z" level=info msg="TearDown network for sandbox \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" successfully"
Nov 5 00:23:36.204219 containerd[1603]: time="2025-11-05T00:23:36.204146387Z" level=info msg="StopPodSandbox for \"b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4\" returns successfully"
Nov 5 00:23:36.205788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4-shm.mount: Deactivated successfully.
Nov 5 00:23:36.207340 containerd[1603]: time="2025-11-05T00:23:36.207056852Z" level=info msg="shim disconnected" id=b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4 namespace=k8s.io
Nov 5 00:23:36.207340 containerd[1603]: time="2025-11-05T00:23:36.207077341Z" level=warning msg="cleaning up after shim disconnected" id=b37496fafc4048971a6e334ec67ee72ea33dd56f0170db5b367d991ab115cca4 namespace=k8s.io
Nov 5 00:23:36.207340 containerd[1603]: time="2025-11-05T00:23:36.207122031Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 5 00:23:36.238907 containerd[1603]: time="2025-11-05T00:23:36.238591557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" id:\"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" pid:2947 exit_status:137 exited_at:{seconds:1762302216 nanos:213484357}"
Nov 5 00:23:36.251768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a-rootfs.mount: Deactivated successfully.
Nov 5 00:23:36.256073 containerd[1603]: time="2025-11-05T00:23:36.256007369Z" level=info msg="received exit event sandbox_id:\"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" exit_status:137 exited_at:{seconds:1762302216 nanos:213484357}"
Nov 5 00:23:36.257111 containerd[1603]: time="2025-11-05T00:23:36.256458606Z" level=info msg="shim disconnected" id=5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a namespace=k8s.io
Nov 5 00:23:36.257111 containerd[1603]: time="2025-11-05T00:23:36.256593365Z" level=warning msg="cleaning up after shim disconnected" id=5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a namespace=k8s.io
Nov 5 00:23:36.257111 containerd[1603]: time="2025-11-05T00:23:36.256608255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 5 00:23:36.257735 containerd[1603]: time="2025-11-05T00:23:36.257711995Z" level=info msg="TearDown network for sandbox \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" successfully"
Nov 5 00:23:36.257797 containerd[1603]: time="2025-11-05T00:23:36.257735045Z" level=info msg="StopPodSandbox for \"5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a\" returns successfully"
Nov 5 00:23:36.372364 kubelet[2792]: I1105 00:23:36.372316 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-hubble-tls\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372466 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-hostproc\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372494 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-kernel\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372516 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cni-path\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372532 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-lib-modules\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372547 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-xtables-lock\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373616 kubelet[2792]: I1105 00:23:36.372782 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-cgroup\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372803 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knk8f\" (UniqueName: \"kubernetes.io/projected/73d6cb90-6df3-4a48-ba22-e889a2919a11-kube-api-access-knk8f\") pod \"73d6cb90-6df3-4a48-ba22-e889a2919a11\" (UID: \"73d6cb90-6df3-4a48-ba22-e889a2919a11\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372823 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-net\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372842 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-config-path\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372863 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8fc3a528-f94c-454d-b240-d94a845ce41f-clustermesh-secrets\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372879 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-run\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373756 kubelet[2792]: I1105 00:23:36.372900 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdxj\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-kube-api-access-sbdxj\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373883 kubelet[2792]: I1105 00:23:36.372917 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-bpf-maps\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373883 kubelet[2792]: I1105 00:23:36.372932 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-etc-cni-netd\") pod \"8fc3a528-f94c-454d-b240-d94a845ce41f\" (UID: \"8fc3a528-f94c-454d-b240-d94a845ce41f\") "
Nov 5 00:23:36.373883 kubelet[2792]: I1105 00:23:36.372948 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73d6cb90-6df3-4a48-ba22-e889a2919a11-cilium-config-path\") pod \"73d6cb90-6df3-4a48-ba22-e889a2919a11\" (UID: \"73d6cb90-6df3-4a48-ba22-e889a2919a11\") "
Nov 5 00:23:36.376202 kubelet[2792]: I1105 00:23:36.375512 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.376801 kubelet[2792]: I1105 00:23:36.376776 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-hostproc" (OuterVolumeSpecName: "hostproc") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.376904 kubelet[2792]: I1105 00:23:36.376841 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.376904 kubelet[2792]: I1105 00:23:36.376868 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cni-path" (OuterVolumeSpecName: "cni-path") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.377551 kubelet[2792]: I1105 00:23:36.377327 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.377551 kubelet[2792]: I1105 00:23:36.377359 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.377551 kubelet[2792]: I1105 00:23:36.377374 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.383285 kubelet[2792]: I1105 00:23:36.383264 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.383395 kubelet[2792]: I1105 00:23:36.383340 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fc3a528-f94c-454d-b240-d94a845ce41f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 5 00:23:36.383497 kubelet[2792]: I1105 00:23:36.383471 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d6cb90-6df3-4a48-ba22-e889a2919a11-kube-api-access-knk8f" (OuterVolumeSpecName: "kube-api-access-knk8f") pod "73d6cb90-6df3-4a48-ba22-e889a2919a11" (UID: "73d6cb90-6df3-4a48-ba22-e889a2919a11"). InnerVolumeSpecName "kube-api-access-knk8f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 5 00:23:36.383739 kubelet[2792]: I1105 00:23:36.383718 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 5 00:23:36.384056 kubelet[2792]: I1105 00:23:36.383988 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 5 00:23:36.384113 kubelet[2792]: I1105 00:23:36.384008 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.384158 kubelet[2792]: I1105 00:23:36.384023 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 5 00:23:36.385490 kubelet[2792]: I1105 00:23:36.385431 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d6cb90-6df3-4a48-ba22-e889a2919a11-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "73d6cb90-6df3-4a48-ba22-e889a2919a11" (UID: "73d6cb90-6df3-4a48-ba22-e889a2919a11"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 5 00:23:36.386394 kubelet[2792]: I1105 00:23:36.386137 2792 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-kube-api-access-sbdxj" (OuterVolumeSpecName: "kube-api-access-sbdxj") pod "8fc3a528-f94c-454d-b240-d94a845ce41f" (UID: "8fc3a528-f94c-454d-b240-d94a845ce41f"). InnerVolumeSpecName "kube-api-access-sbdxj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473348 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-hubble-tls\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473392 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-hostproc\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473410 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-kernel\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473426 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cni-path\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473437 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-lib-modules\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473406 kubelet[2792]: I1105 00:23:36.473447 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-xtables-lock\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473457 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-cgroup\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473467 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-knk8f\" (UniqueName: \"kubernetes.io/projected/73d6cb90-6df3-4a48-ba22-e889a2919a11-kube-api-access-knk8f\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473478 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-host-proc-sys-net\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473487 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-config-path\") on node \"172-234-219-54\" DevicePath \"\""
Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473496 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName:
\"kubernetes.io/secret/8fc3a528-f94c-454d-b240-d94a845ce41f-clustermesh-secrets\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473506 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-cilium-run\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473516 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbdxj\" (UniqueName: \"kubernetes.io/projected/8fc3a528-f94c-454d-b240-d94a845ce41f-kube-api-access-sbdxj\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.473758 kubelet[2792]: I1105 00:23:36.473525 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-bpf-maps\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.473950 kubelet[2792]: I1105 00:23:36.473533 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8fc3a528-f94c-454d-b240-d94a845ce41f-etc-cni-netd\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.473950 kubelet[2792]: I1105 00:23:36.473543 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73d6cb90-6df3-4a48-ba22-e889a2919a11-cilium-config-path\") on node \"172-234-219-54\" DevicePath \"\"" Nov 5 00:23:36.525963 kubelet[2792]: I1105 00:23:36.525701 2792 scope.go:117] "RemoveContainer" containerID="fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8" Nov 5 00:23:36.533542 containerd[1603]: time="2025-11-05T00:23:36.533367255Z" level=info msg="RemoveContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\"" Nov 5 00:23:36.537616 systemd[1]: Removed slice kubepods-besteffort-pod73d6cb90_6df3_4a48_ba22_e889a2919a11.slice - libcontainer container 
kubepods-besteffort-pod73d6cb90_6df3_4a48_ba22_e889a2919a11.slice. Nov 5 00:23:36.542157 containerd[1603]: time="2025-11-05T00:23:36.541994172Z" level=info msg="RemoveContainer for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" returns successfully" Nov 5 00:23:36.542442 kubelet[2792]: I1105 00:23:36.542424 2792 scope.go:117] "RemoveContainer" containerID="fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8" Nov 5 00:23:36.542990 containerd[1603]: time="2025-11-05T00:23:36.542732996Z" level=error msg="ContainerStatus for \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\": not found" Nov 5 00:23:36.544244 kubelet[2792]: E1105 00:23:36.543860 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\": not found" containerID="fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8" Nov 5 00:23:36.544244 kubelet[2792]: I1105 00:23:36.543960 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8"} err="failed to get container status \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa9aa9de63c0deabd75364b39c16092459434186963f925e09480b02498106d8\": not found" Nov 5 00:23:36.544244 kubelet[2792]: I1105 00:23:36.544034 2792 scope.go:117] "RemoveContainer" containerID="f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc" Nov 5 00:23:36.549153 containerd[1603]: time="2025-11-05T00:23:36.549105633Z" level=info msg="RemoveContainer for 
\"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\"" Nov 5 00:23:36.554598 systemd[1]: Removed slice kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice - libcontainer container kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice. Nov 5 00:23:36.554694 systemd[1]: kubepods-burstable-pod8fc3a528_f94c_454d_b240_d94a845ce41f.slice: Consumed 7.259s CPU time, 124.8M memory peak, 136K read from disk, 13.3M written to disk. Nov 5 00:23:36.570235 containerd[1603]: time="2025-11-05T00:23:36.570057176Z" level=info msg="RemoveContainer for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" returns successfully" Nov 5 00:23:36.570506 kubelet[2792]: I1105 00:23:36.570458 2792 scope.go:117] "RemoveContainer" containerID="5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699" Nov 5 00:23:36.573502 containerd[1603]: time="2025-11-05T00:23:36.573458237Z" level=info msg="RemoveContainer for \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\"" Nov 5 00:23:36.581094 containerd[1603]: time="2025-11-05T00:23:36.581053694Z" level=info msg="RemoveContainer for \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" returns successfully" Nov 5 00:23:36.581294 kubelet[2792]: I1105 00:23:36.581268 2792 scope.go:117] "RemoveContainer" containerID="92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582" Nov 5 00:23:36.583672 containerd[1603]: time="2025-11-05T00:23:36.583639911Z" level=info msg="RemoveContainer for \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\"" Nov 5 00:23:36.586882 containerd[1603]: time="2025-11-05T00:23:36.586847425Z" level=info msg="RemoveContainer for \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" returns successfully" Nov 5 00:23:36.587058 kubelet[2792]: I1105 00:23:36.586991 2792 scope.go:117] "RemoveContainer" containerID="261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726" Nov 5 00:23:36.588669 
containerd[1603]: time="2025-11-05T00:23:36.588631480Z" level=info msg="RemoveContainer for \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\"" Nov 5 00:23:36.591367 containerd[1603]: time="2025-11-05T00:23:36.591334897Z" level=info msg="RemoveContainer for \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" returns successfully" Nov 5 00:23:36.591527 kubelet[2792]: I1105 00:23:36.591473 2792 scope.go:117] "RemoveContainer" containerID="1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da" Nov 5 00:23:36.592856 containerd[1603]: time="2025-11-05T00:23:36.592827535Z" level=info msg="RemoveContainer for \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\"" Nov 5 00:23:36.595280 containerd[1603]: time="2025-11-05T00:23:36.595245944Z" level=info msg="RemoveContainer for \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" returns successfully" Nov 5 00:23:36.595415 kubelet[2792]: I1105 00:23:36.595381 2792 scope.go:117] "RemoveContainer" containerID="f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc" Nov 5 00:23:36.595689 containerd[1603]: time="2025-11-05T00:23:36.595583411Z" level=error msg="ContainerStatus for \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\": not found" Nov 5 00:23:36.595899 kubelet[2792]: E1105 00:23:36.595828 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\": not found" containerID="f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc" Nov 5 00:23:36.595899 kubelet[2792]: I1105 00:23:36.595863 2792 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc"} err="failed to get container status \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8c79e0101d57e62f6894ae629093c68751660e67d91cb54eb6076cb9dd0efbc\": not found" Nov 5 00:23:36.595899 kubelet[2792]: I1105 00:23:36.595884 2792 scope.go:117] "RemoveContainer" containerID="5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699" Nov 5 00:23:36.596112 containerd[1603]: time="2025-11-05T00:23:36.596056597Z" level=error msg="ContainerStatus for \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\": not found" Nov 5 00:23:36.596226 kubelet[2792]: E1105 00:23:36.596140 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\": not found" containerID="5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699" Nov 5 00:23:36.596226 kubelet[2792]: I1105 00:23:36.596161 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699"} err="failed to get container status \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f4bbddde75fca01988362afdd98122c5230d3f0326e7da5ecdc663dc9efc699\": not found" Nov 5 00:23:36.596226 kubelet[2792]: I1105 00:23:36.596175 2792 scope.go:117] "RemoveContainer" containerID="92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582" Nov 5 00:23:36.596375 containerd[1603]: 
time="2025-11-05T00:23:36.596344354Z" level=error msg="ContainerStatus for \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\": not found" Nov 5 00:23:36.596527 kubelet[2792]: E1105 00:23:36.596503 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\": not found" containerID="92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582" Nov 5 00:23:36.596573 kubelet[2792]: I1105 00:23:36.596526 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582"} err="failed to get container status \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\": rpc error: code = NotFound desc = an error occurred when try to find container \"92af09ec23d1e0c5d574d02dab1670dc0efbdb0be04e425d53ef44ef45b42582\": not found" Nov 5 00:23:36.596573 kubelet[2792]: I1105 00:23:36.596540 2792 scope.go:117] "RemoveContainer" containerID="261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726" Nov 5 00:23:36.596709 containerd[1603]: time="2025-11-05T00:23:36.596659263Z" level=error msg="ContainerStatus for \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\": not found" Nov 5 00:23:36.596820 kubelet[2792]: E1105 00:23:36.596767 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\": not 
found" containerID="261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726" Nov 5 00:23:36.596820 kubelet[2792]: I1105 00:23:36.596785 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726"} err="failed to get container status \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\": rpc error: code = NotFound desc = an error occurred when try to find container \"261a1c64781e975f61c5f83ac2a73ccd1655edb22c11747cbf6c906780856726\": not found" Nov 5 00:23:36.596820 kubelet[2792]: I1105 00:23:36.596799 2792 scope.go:117] "RemoveContainer" containerID="1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da" Nov 5 00:23:36.597177 containerd[1603]: time="2025-11-05T00:23:36.597031719Z" level=error msg="ContainerStatus for \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\": not found" Nov 5 00:23:36.597359 kubelet[2792]: E1105 00:23:36.597334 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\": not found" containerID="1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da" Nov 5 00:23:36.597524 kubelet[2792]: I1105 00:23:36.597466 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da"} err="failed to get container status \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\": rpc error: code = NotFound desc = an error occurred when try to find container \"1123f2604d0e4a702ad1c0851bd5236b782ff4d3f9253cf9acc3fd699e1dc4da\": not found" Nov 5 
00:23:37.100157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e7744fbbc8b82a7f25069f79fe3850e2dd49ad0e587b4daa3e2c4d1ef2cfd2a-shm.mount: Deactivated successfully. Nov 5 00:23:37.100953 systemd[1]: var-lib-kubelet-pods-73d6cb90\x2d6df3\x2d4a48\x2dba22\x2de889a2919a11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknk8f.mount: Deactivated successfully. Nov 5 00:23:37.101077 systemd[1]: var-lib-kubelet-pods-8fc3a528\x2df94c\x2d454d\x2db240\x2dd94a845ce41f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 5 00:23:37.101168 systemd[1]: var-lib-kubelet-pods-8fc3a528\x2df94c\x2d454d\x2db240\x2dd94a845ce41f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsbdxj.mount: Deactivated successfully. Nov 5 00:23:37.101290 systemd[1]: var-lib-kubelet-pods-8fc3a528\x2df94c\x2d454d\x2db240\x2dd94a845ce41f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 00:23:37.999218 kubelet[2792]: I1105 00:23:37.998519 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d6cb90-6df3-4a48-ba22-e889a2919a11" path="/var/lib/kubelet/pods/73d6cb90-6df3-4a48-ba22-e889a2919a11/volumes" Nov 5 00:23:37.999973 kubelet[2792]: I1105 00:23:37.999953 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fc3a528-f94c-454d-b240-d94a845ce41f" path="/var/lib/kubelet/pods/8fc3a528-f94c-454d-b240-d94a845ce41f/volumes" Nov 5 00:23:38.026003 sshd[4391]: Connection closed by 139.178.68.195 port 49174 Nov 5 00:23:38.027024 sshd-session[4388]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:38.033053 systemd[1]: sshd@24-172.234.219.54:22-139.178.68.195:49174.service: Deactivated successfully. Nov 5 00:23:38.035776 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 00:23:38.039740 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Nov 5 00:23:38.041779 systemd-logind[1575]: Removed session 25. 
Nov 5 00:23:38.090938 systemd[1]: Started sshd@25-172.234.219.54:22-139.178.68.195:49178.service - OpenSSH per-connection server daemon (139.178.68.195:49178). Nov 5 00:23:38.454869 sshd[4543]: Accepted publickey for core from 139.178.68.195 port 49178 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:38.456979 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:38.466489 systemd-logind[1575]: New session 26 of user core. Nov 5 00:23:38.469331 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 00:23:39.045346 systemd[1]: Created slice kubepods-burstable-pod7cc8f623_c9ce_46e5_9612_2f83a2945487.slice - libcontainer container kubepods-burstable-pod7cc8f623_c9ce_46e5_9612_2f83a2945487.slice. Nov 5 00:23:39.052205 sshd[4547]: Connection closed by 139.178.68.195 port 49178 Nov 5 00:23:39.052679 sshd-session[4543]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:39.059018 systemd[1]: sshd@25-172.234.219.54:22-139.178.68.195:49178.service: Deactivated successfully. Nov 5 00:23:39.063652 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 00:23:39.071352 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit. Nov 5 00:23:39.074555 systemd-logind[1575]: Removed session 26. Nov 5 00:23:39.103353 kubelet[2792]: E1105 00:23:39.103304 2792 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 00:23:39.121285 systemd[1]: Started sshd@26-172.234.219.54:22-139.178.68.195:49190.service - OpenSSH per-connection server daemon (139.178.68.195:49190). 
Nov 5 00:23:39.192583 kubelet[2792]: I1105 00:23:39.192518 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77kl2\" (UniqueName: \"kubernetes.io/projected/7cc8f623-c9ce-46e5-9612-2f83a2945487-kube-api-access-77kl2\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192583 kubelet[2792]: I1105 00:23:39.192565 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-cilium-cgroup\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192583 kubelet[2792]: I1105 00:23:39.192589 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cc8f623-c9ce-46e5-9612-2f83a2945487-cilium-config-path\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192603 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-host-proc-sys-net\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192633 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cc8f623-c9ce-46e5-9612-2f83a2945487-hubble-tls\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192654 2792 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-cni-path\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192668 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-cilium-run\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192681 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-hostproc\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192788 kubelet[2792]: I1105 00:23:39.192694 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-etc-cni-netd\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192710 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-bpf-maps\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192724 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/7cc8f623-c9ce-46e5-9612-2f83a2945487-cilium-ipsec-secrets\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192737 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-lib-modules\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192749 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-xtables-lock\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192763 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cc8f623-c9ce-46e5-9612-2f83a2945487-clustermesh-secrets\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.192956 kubelet[2792]: I1105 00:23:39.192776 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cc8f623-c9ce-46e5-9612-2f83a2945487-host-proc-sys-kernel\") pod \"cilium-x4znq\" (UID: \"7cc8f623-c9ce-46e5-9612-2f83a2945487\") " pod="kube-system/cilium-x4znq" Nov 5 00:23:39.348736 kubelet[2792]: E1105 00:23:39.348691 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:23:39.349840 containerd[1603]: 
time="2025-11-05T00:23:39.349380044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4znq,Uid:7cc8f623-c9ce-46e5-9612-2f83a2945487,Namespace:kube-system,Attempt:0,}" Nov 5 00:23:39.371136 containerd[1603]: time="2025-11-05T00:23:39.371103767Z" level=info msg="connecting to shim 5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" namespace=k8s.io protocol=ttrpc version=3 Nov 5 00:23:39.410331 systemd[1]: Started cri-containerd-5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8.scope - libcontainer container 5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8. Nov 5 00:23:39.444818 containerd[1603]: time="2025-11-05T00:23:39.444786296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4znq,Uid:7cc8f623-c9ce-46e5-9612-2f83a2945487,Namespace:kube-system,Attempt:0,} returns sandbox id \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\"" Nov 5 00:23:39.445909 kubelet[2792]: E1105 00:23:39.445863 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:23:39.451210 containerd[1603]: time="2025-11-05T00:23:39.450704198Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 00:23:39.457499 containerd[1603]: time="2025-11-05T00:23:39.457468533Z" level=info msg="Container 74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2: CDI devices from CRI Config.CDIDevices: []" Nov 5 00:23:39.462257 containerd[1603]: time="2025-11-05T00:23:39.462167404Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\"" Nov 5 00:23:39.463095 containerd[1603]: time="2025-11-05T00:23:39.463076377Z" level=info msg="StartContainer for \"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\"" Nov 5 00:23:39.464115 containerd[1603]: time="2025-11-05T00:23:39.464095458Z" level=info msg="connecting to shim 74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" protocol=ttrpc version=3 Nov 5 00:23:39.481953 sshd[4558]: Accepted publickey for core from 139.178.68.195 port 49190 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:39.483249 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:39.484385 systemd[1]: Started cri-containerd-74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2.scope - libcontainer container 74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2. Nov 5 00:23:39.494829 systemd-logind[1575]: New session 27 of user core. Nov 5 00:23:39.499366 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 5 00:23:39.537062 containerd[1603]: time="2025-11-05T00:23:39.536748916Z" level=info msg="StartContainer for \"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\" returns successfully" Nov 5 00:23:39.552392 systemd[1]: cri-containerd-74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2.scope: Deactivated successfully. 
Nov 5 00:23:39.554039 kubelet[2792]: E1105 00:23:39.554018 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19" Nov 5 00:23:39.559811 containerd[1603]: time="2025-11-05T00:23:39.559755568Z" level=info msg="received exit event container_id:\"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\" id:\"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\" pid:4622 exited_at:{seconds:1762302219 nanos:555444283}" Nov 5 00:23:39.560709 containerd[1603]: time="2025-11-05T00:23:39.560681651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\" id:\"74868ecfae1020a07079add404f317a0b48503613c2ae409e2b8ef4031c0fec2\" pid:4622 exited_at:{seconds:1762302219 nanos:555444283}" Nov 5 00:23:39.731114 sshd[4629]: Connection closed by 139.178.68.195 port 49190 Nov 5 00:23:39.732088 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Nov 5 00:23:39.740846 systemd[1]: sshd@26-172.234.219.54:22-139.178.68.195:49190.service: Deactivated successfully. Nov 5 00:23:39.743730 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 00:23:39.746426 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit. Nov 5 00:23:39.747664 systemd-logind[1575]: Removed session 27. Nov 5 00:23:39.799094 systemd[1]: Started sshd@27-172.234.219.54:22-139.178.68.195:49194.service - OpenSSH per-connection server daemon (139.178.68.195:49194). Nov 5 00:23:40.162772 sshd[4664]: Accepted publickey for core from 139.178.68.195 port 49194 ssh2: RSA SHA256:JT0MJavnH1qRWXM4G4M2ffpAftuwyoL2j6X7xKn15ZA Nov 5 00:23:40.164538 sshd-session[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 00:23:40.170449 systemd-logind[1575]: New session 28 of user core. 
Nov 5 00:23:40.175443 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 5 00:23:40.561101 kubelet[2792]: E1105 00:23:40.560716 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:40.569743 containerd[1603]: time="2025-11-05T00:23:40.569692318Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 5 00:23:40.590778 containerd[1603]: time="2025-11-05T00:23:40.590726997Z" level=info msg="Container 0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:23:40.591297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166798619.mount: Deactivated successfully.
Nov 5 00:23:40.601692 containerd[1603]: time="2025-11-05T00:23:40.601654029Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\""
Nov 5 00:23:40.602929 containerd[1603]: time="2025-11-05T00:23:40.602885489Z" level=info msg="StartContainer for \"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\""
Nov 5 00:23:40.605478 containerd[1603]: time="2025-11-05T00:23:40.605386719Z" level=info msg="connecting to shim 0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" protocol=ttrpc version=3
Nov 5 00:23:40.648327 systemd[1]: Started cri-containerd-0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad.scope - libcontainer container 0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad.
Nov 5 00:23:40.699445 containerd[1603]: time="2025-11-05T00:23:40.699383001Z" level=info msg="StartContainer for \"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\" returns successfully"
Nov 5 00:23:40.710931 systemd[1]: cri-containerd-0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad.scope: Deactivated successfully.
Nov 5 00:23:40.713667 containerd[1603]: time="2025-11-05T00:23:40.713432607Z" level=info msg="received exit event container_id:\"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\" id:\"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\" pid:4688 exited_at:{seconds:1762302220 nanos:712077298}"
Nov 5 00:23:40.713667 containerd[1603]: time="2025-11-05T00:23:40.713545276Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\" id:\"0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad\" pid:4688 exited_at:{seconds:1762302220 nanos:712077298}"
Nov 5 00:23:41.304448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d0f7b186e5f454dd52801489ff561c66746249c7a0a58468a1a076b6c2d18ad-rootfs.mount: Deactivated successfully.
Nov 5 00:23:41.565228 kubelet[2792]: E1105 00:23:41.564449 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:41.571104 containerd[1603]: time="2025-11-05T00:23:41.571064579Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 5 00:23:41.589400 containerd[1603]: time="2025-11-05T00:23:41.589275833Z" level=info msg="Container bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:23:41.593524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521319975.mount: Deactivated successfully.
Nov 5 00:23:41.602263 containerd[1603]: time="2025-11-05T00:23:41.602207041Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\""
Nov 5 00:23:41.602964 containerd[1603]: time="2025-11-05T00:23:41.602932204Z" level=info msg="StartContainer for \"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\""
Nov 5 00:23:41.604026 containerd[1603]: time="2025-11-05T00:23:41.604006376Z" level=info msg="connecting to shim bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" protocol=ttrpc version=3
Nov 5 00:23:41.632428 systemd[1]: Started cri-containerd-bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110.scope - libcontainer container bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110.
Nov 5 00:23:41.685429 containerd[1603]: time="2025-11-05T00:23:41.685332896Z" level=info msg="StartContainer for \"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\" returns successfully"
Nov 5 00:23:41.690060 systemd[1]: cri-containerd-bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110.scope: Deactivated successfully.
Nov 5 00:23:41.691577 containerd[1603]: time="2025-11-05T00:23:41.691455437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\" id:\"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\" pid:4733 exited_at:{seconds:1762302221 nanos:689798220}"
Nov 5 00:23:41.691803 containerd[1603]: time="2025-11-05T00:23:41.691589806Z" level=info msg="received exit event container_id:\"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\" id:\"bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110\" pid:4733 exited_at:{seconds:1762302221 nanos:689798220}"
Nov 5 00:23:41.729219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf2c0b6340ff035d2a4498d0509019c4a6988d04409f535b09ad2fc4ca513110-rootfs.mount: Deactivated successfully.
Nov 5 00:23:42.574310 kubelet[2792]: E1105 00:23:42.573902 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:42.581421 containerd[1603]: time="2025-11-05T00:23:42.581355663Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 5 00:23:42.594565 containerd[1603]: time="2025-11-05T00:23:42.592228606Z" level=info msg="Container d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:23:42.602233 containerd[1603]: time="2025-11-05T00:23:42.602159298Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\""
Nov 5 00:23:42.603531 containerd[1603]: time="2025-11-05T00:23:42.603511957Z" level=info msg="StartContainer for \"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\""
Nov 5 00:23:42.605460 containerd[1603]: time="2025-11-05T00:23:42.605438712Z" level=info msg="connecting to shim d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" protocol=ttrpc version=3
Nov 5 00:23:42.639355 systemd[1]: Started cri-containerd-d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb.scope - libcontainer container d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb.
Nov 5 00:23:42.679767 systemd[1]: cri-containerd-d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb.scope: Deactivated successfully.
Nov 5 00:23:42.681637 containerd[1603]: time="2025-11-05T00:23:42.681572140Z" level=info msg="received exit event container_id:\"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\" id:\"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\" pid:4773 exited_at:{seconds:1762302222 nanos:681330902}"
Nov 5 00:23:42.681869 containerd[1603]: time="2025-11-05T00:23:42.681796858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\" id:\"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\" pid:4773 exited_at:{seconds:1762302222 nanos:681330902}"
Nov 5 00:23:42.682126 containerd[1603]: time="2025-11-05T00:23:42.682082145Z" level=info msg="StartContainer for \"d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb\" returns successfully"
Nov 5 00:23:42.716060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1b16046b83067cb5693ba36298ea9901a213d5e989ad56ebb7108fc153a78cb-rootfs.mount: Deactivated successfully.
Nov 5 00:23:43.580798 kubelet[2792]: E1105 00:23:43.580605 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:43.587522 containerd[1603]: time="2025-11-05T00:23:43.587467331Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 5 00:23:43.603426 containerd[1603]: time="2025-11-05T00:23:43.603384097Z" level=info msg="Container 5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550: CDI devices from CRI Config.CDIDevices: []"
Nov 5 00:23:43.616755 containerd[1603]: time="2025-11-05T00:23:43.616705352Z" level=info msg="CreateContainer within sandbox \"5873f21f5b5be72dc5b5ab4dcf561e29ae75870fcfa58c852acff3711d98c4c8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\""
Nov 5 00:23:43.618286 containerd[1603]: time="2025-11-05T00:23:43.618252780Z" level=info msg="StartContainer for \"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\""
Nov 5 00:23:43.619207 containerd[1603]: time="2025-11-05T00:23:43.619153682Z" level=info msg="connecting to shim 5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550" address="unix:///run/containerd/s/3fc845661b3c03e1afd423b8886e7b62202170c2b8c1a4706e84970e1706ffc7" protocol=ttrpc version=3
Nov 5 00:23:43.659360 systemd[1]: Started cri-containerd-5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550.scope - libcontainer container 5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550.
Nov 5 00:23:43.719682 containerd[1603]: time="2025-11-05T00:23:43.719615497Z" level=info msg="StartContainer for \"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" returns successfully"
Nov 5 00:23:43.813146 containerd[1603]: time="2025-11-05T00:23:43.813090954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"2ba1dfc3962e7980ccf9794b7a3b0e41bfae020f4c54df0fb55d055ae5da18ab\" pid:4841 exited_at:{seconds:1762302223 nanos:812500349}"
Nov 5 00:23:44.299287 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Nov 5 00:23:44.586300 kubelet[2792]: E1105 00:23:44.586276 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:45.588716 kubelet[2792]: E1105 00:23:45.588664 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:46.653716 containerd[1603]: time="2025-11-05T00:23:46.653630381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"2fdfd2cf83de48ff7b513da334ea9388834381799f91bd5f5f9f89a251082ba0\" pid:5146 exit_status:1 exited_at:{seconds:1762302226 nanos:653342793}"
Nov 5 00:23:47.349490 systemd-networkd[1503]: lxc_health: Link UP
Nov 5 00:23:47.352598 kubelet[2792]: E1105 00:23:47.352544 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:47.353016 systemd-networkd[1503]: lxc_health: Gained carrier
Nov 5 00:23:47.385376 kubelet[2792]: I1105 00:23:47.385320 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x4znq" podStartSLOduration=8.385304 podStartE2EDuration="8.385304s" podCreationTimestamp="2025-11-05 00:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 00:23:44.600638565 +0000 UTC m=+220.761723895" watchObservedRunningTime="2025-11-05 00:23:47.385304 +0000 UTC m=+223.546389330"
Nov 5 00:23:47.593373 kubelet[2792]: E1105 00:23:47.593169 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:48.597214 kubelet[2792]: E1105 00:23:48.596721 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:48.906466 containerd[1603]: time="2025-11-05T00:23:48.905737910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"1ac33321e2fc523035a92b238a5f4f45f592c2f8cbf5af46c861210810f73456\" pid:5384 exited_at:{seconds:1762302228 nanos:904464189}"
Nov 5 00:23:49.098510 systemd-networkd[1503]: lxc_health: Gained IPv6LL
Nov 5 00:23:51.112724 containerd[1603]: time="2025-11-05T00:23:51.112674883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"f2d52d4ce89230993b808fad6276e28016a36531b6e42ad82b9f9682fd7e2b2b\" pid:5423 exited_at:{seconds:1762302231 nanos:112201226}"
Nov 5 00:23:53.596093 containerd[1603]: time="2025-11-05T00:23:53.596012321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"d6168d68f6cef6e0553f4614de5310e4f94f857a0d68f1200c3eaed11b33f1f6\" pid:5456 exited_at:{seconds:1762302233 nanos:595328916}"
Nov 5 00:23:55.878629 containerd[1603]: time="2025-11-05T00:23:55.878339082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dbcb9e1388f31260dbb15bedca7489b1fcbae92c3efcf4ee73a990a056b3550\" id:\"359c5eaeadeadac6b042e1f07da74648addc173af2003a99eafec9359c5b4ff3\" pid:5479 exited_at:{seconds:1762302235 nanos:877992854}"
Nov 5 00:23:56.000112 sshd[4667]: Connection closed by 139.178.68.195 port 49194
Nov 5 00:23:56.000830 sshd-session[4664]: pam_unix(sshd:session): session closed for user core
Nov 5 00:23:56.006803 systemd[1]: sshd@27-172.234.219.54:22-139.178.68.195:49194.service: Deactivated successfully.
Nov 5 00:23:56.010004 systemd[1]: session-28.scope: Deactivated successfully.
Nov 5 00:23:56.011388 systemd-logind[1575]: Session 28 logged out. Waiting for processes to exit.
Nov 5 00:23:56.013718 systemd-logind[1575]: Removed session 28.
Nov 5 00:23:56.988725 kubelet[2792]: E1105 00:23:56.988346 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"
Nov 5 00:23:57.989729 kubelet[2792]: E1105 00:23:57.988612 2792 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.22 172.232.0.9 172.232.0.19"