Nov 8 00:29:33.951742 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:29:33.951779 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:29:33.951799 kernel: BIOS-provided physical RAM map: Nov 8 00:29:33.951811 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 8 00:29:33.951822 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Nov 8 00:29:33.951834 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Nov 8 00:29:33.951848 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Nov 8 00:29:33.951860 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Nov 8 00:29:33.951872 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Nov 8 00:29:33.951888 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Nov 8 00:29:33.951900 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Nov 8 00:29:33.951912 kernel: NX (Execute Disable) protection: active Nov 8 00:29:33.951924 kernel: APIC: Static calls initialized Nov 8 00:29:33.951937 kernel: efi: EFI v2.7 by EDK II Nov 8 00:29:33.951951 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518 Nov 8 00:29:33.951969 kernel: SMBIOS 2.7 present. 
Nov 8 00:29:33.951984 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Nov 8 00:29:33.951997 kernel: Hypervisor detected: KVM Nov 8 00:29:33.952009 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:29:33.952022 kernel: kvm-clock: using sched offset of 3787623397 cycles Nov 8 00:29:33.952035 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:29:33.952047 kernel: tsc: Detected 2499.998 MHz processor Nov 8 00:29:33.952060 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:29:33.952088 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:29:33.952103 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Nov 8 00:29:33.953192 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 8 00:29:33.953208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:29:33.953221 kernel: Using GB pages for direct mapping Nov 8 00:29:33.953234 kernel: Secure boot disabled Nov 8 00:29:33.953246 kernel: ACPI: Early table checksum verification disabled Nov 8 00:29:33.953259 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Nov 8 00:29:33.953273 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Nov 8 00:29:33.953285 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 8 00:29:33.953297 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 8 00:29:33.953316 kernel: ACPI: FACS 0x00000000789D0000 000040 Nov 8 00:29:33.953329 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Nov 8 00:29:33.953342 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 8 00:29:33.953355 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 8 00:29:33.953369 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Nov 8 00:29:33.953383 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Nov 8 00:29:33.953402 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 8 00:29:33.953419 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Nov 8 00:29:33.953432 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Nov 8 00:29:33.953447 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Nov 8 00:29:33.953461 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Nov 8 00:29:33.953475 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Nov 8 00:29:33.953490 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Nov 8 00:29:33.953505 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Nov 8 00:29:33.953522 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Nov 8 00:29:33.953536 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Nov 8 00:29:33.953551 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Nov 8 00:29:33.953566 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Nov 8 00:29:33.953581 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Nov 8 00:29:33.953596 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 8 00:29:33.953611 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:29:33.953626 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 8 00:29:33.953641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Nov 8 00:29:33.953660 kernel: NUMA: Initialized distance table, cnt=1 Nov 8 00:29:33.953674 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff] Nov 8 00:29:33.953689 kernel: Zone ranges: Nov 8 00:29:33.953704 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:29:33.953718 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Nov 8 00:29:33.953733 kernel: Normal empty Nov 8 00:29:33.953751 kernel: Movable zone start for each node Nov 8 00:29:33.953773 kernel: Early memory node ranges Nov 8 00:29:33.953789 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 8 00:29:33.953808 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Nov 8 00:29:33.953822 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Nov 8 00:29:33.953836 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Nov 8 00:29:33.953851 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:29:33.953866 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 8 00:29:33.953882 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Nov 8 00:29:33.953899 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Nov 8 00:29:33.953914 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 8 00:29:33.953929 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:29:33.953943 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Nov 8 00:29:33.953962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:29:33.953978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:29:33.953994 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:29:33.954008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:29:33.954023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:29:33.954037 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:29:33.954052 kernel: TSC deadline timer available Nov 8 00:29:33.954068 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:29:33.954122 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:29:33.954141 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Nov 8 00:29:33.954156 kernel: Booting paravirtualized kernel on KVM Nov 8 00:29:33.954171 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:29:33.954186 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:29:33.954200 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:29:33.954215 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:29:33.954229 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:29:33.954242 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:29:33.954256 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:29:33.954276 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:29:33.954289 kernel: random: crng init done Nov 8 00:29:33.954303 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:29:33.954316 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:29:33.954340 kernel: Fallback order for Node 0: 0 Nov 8 00:29:33.954354 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318 Nov 8 00:29:33.954365 kernel: Policy zone: DMA32 Nov 8 00:29:33.954381 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:29:33.954399 kernel: Memory: 1874604K/2037804K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 162940K reserved, 0K cma-reserved) Nov 8 00:29:33.954416 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:29:33.954431 kernel: Kernel/User page tables isolation: enabled Nov 8 00:29:33.954445 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:29:33.954459 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:29:33.954474 kernel: Dynamic Preempt: voluntary Nov 8 00:29:33.954488 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:29:33.954509 kernel: rcu: RCU event tracing is enabled. Nov 8 00:29:33.954524 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:29:33.954542 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:29:33.954558 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:29:33.954574 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:29:33.954589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:29:33.954605 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:29:33.954621 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:29:33.954637 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:29:33.954667 kernel: Console: colour dummy device 80x25 Nov 8 00:29:33.954684 kernel: printk: console [tty0] enabled Nov 8 00:29:33.954699 kernel: printk: console [ttyS0] enabled Nov 8 00:29:33.954716 kernel: ACPI: Core revision 20230628 Nov 8 00:29:33.954733 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Nov 8 00:29:33.954753 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:29:33.954767 kernel: x2apic enabled Nov 8 00:29:33.954784 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:29:33.954801 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 8 00:29:33.954818 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Nov 8 00:29:33.954837 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:29:33.954853 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:29:33.954869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:29:33.954884 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:29:33.954900 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:29:33.954916 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:29:33.954933 kernel: RETBleed: Vulnerable Nov 8 00:29:33.954950 kernel: Speculative Store Bypass: Vulnerable Nov 8 00:29:33.954967 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:29:33.954985 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:29:33.955004 kernel: GDS: Unknown: Dependent on hypervisor status Nov 8 00:29:33.955020 kernel: active return thunk: its_return_thunk Nov 8 00:29:33.955035 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:29:33.955051 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:29:33.955068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:29:33.958743 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:29:33.958766 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 8 00:29:33.958783 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Nov 8 00:29:33.958800 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 8 00:29:33.958816 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 8 00:29:33.958832 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 8 00:29:33.958855 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 8 00:29:33.958871 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:29:33.958886 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 8 00:29:33.958902 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 8 00:29:33.958918 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Nov 8 00:29:33.958935 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Nov 8 00:29:33.958951 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Nov 8 00:29:33.958967 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Nov 8 00:29:33.958984 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Nov 8 00:29:33.959001 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:29:33.959017 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:29:33.959037 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:29:33.959053 kernel: landlock: Up and running. Nov 8 00:29:33.959069 kernel: SELinux: Initializing. Nov 8 00:29:33.959100 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:29:33.959116 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:29:33.959133 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 8 00:29:33.959149 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:29:33.959166 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:29:33.959183 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:29:33.959200 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 8 00:29:33.959220 kernel: signal: max sigframe size: 3632 Nov 8 00:29:33.959237 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:29:33.959255 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:29:33.959272 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:29:33.959288 kernel: smp: Bringing up secondary CPUs ... 
Nov 8 00:29:33.959304 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:29:33.959320 kernel: .... node #0, CPUs: #1 Nov 8 00:29:33.959338 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 8 00:29:33.959355 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 8 00:29:33.959375 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:29:33.959391 kernel: smpboot: Max logical packages: 1 Nov 8 00:29:33.959408 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Nov 8 00:29:33.959424 kernel: devtmpfs: initialized Nov 8 00:29:33.959440 kernel: x86/mm: Memory block size: 128MB Nov 8 00:29:33.959457 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Nov 8 00:29:33.959473 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:29:33.959490 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:29:33.959506 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:29:33.959525 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:29:33.959542 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:29:33.959559 kernel: audit: type=2000 audit(1762561773.013:1): state=initialized audit_enabled=0 res=1 Nov 8 00:29:33.959575 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:29:33.959592 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:29:33.959608 kernel: cpuidle: using governor menu Nov 8 00:29:33.959625 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:29:33.959641 kernel: dca service started, version 1.12.1 Nov 8 00:29:33.959658 kernel: PCI: Using configuration type 1 for base access Nov 8 00:29:33.959678 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:29:33.959694 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:29:33.959711 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:29:33.959728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:29:33.959745 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:29:33.959761 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:29:33.959778 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:29:33.959794 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:29:33.959811 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 8 00:29:33.959831 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:29:33.959847 kernel: ACPI: Interpreter enabled Nov 8 00:29:33.959863 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:29:33.959880 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:29:33.959896 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:29:33.959912 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:29:33.959928 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 8 00:29:33.959945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:29:33.960197 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:29:33.960353 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 8 00:29:33.960504 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 8 00:29:33.960524 kernel: acpiphp: Slot [3] registered Nov 8 00:29:33.960541 kernel: acpiphp: Slot [4] registered Nov 8 00:29:33.960557 kernel: acpiphp: Slot [5] registered Nov 8 00:29:33.960574 kernel: acpiphp: Slot [6] registered Nov 8 00:29:33.960590 kernel: acpiphp: Slot [7] registered Nov 8 00:29:33.960610 kernel: acpiphp: Slot [8] registered Nov 8 00:29:33.960627 kernel: acpiphp: Slot [9] registered Nov 8 00:29:33.960644 kernel: acpiphp: Slot [10] registered Nov 8 00:29:33.960660 kernel: acpiphp: Slot [11] registered Nov 8 00:29:33.960677 kernel: acpiphp: Slot [12] registered Nov 8 00:29:33.960693 kernel: acpiphp: Slot [13] registered Nov 8 00:29:33.960710 kernel: acpiphp: Slot [14] registered Nov 8 00:29:33.960726 kernel: acpiphp: Slot [15] registered Nov 8 00:29:33.960742 kernel: acpiphp: Slot [16] registered Nov 8 00:29:33.960759 kernel: acpiphp: Slot [17] registered Nov 8 00:29:33.960778 kernel: acpiphp: Slot [18] registered Nov 8 00:29:33.960794 kernel: acpiphp: Slot [19] registered Nov 8 00:29:33.960811 kernel: acpiphp: Slot [20] registered Nov 8 00:29:33.960827 kernel: acpiphp: Slot [21] registered Nov 8 00:29:33.960843 kernel: acpiphp: Slot [22] registered Nov 8 00:29:33.960860 kernel: acpiphp: Slot [23] registered Nov 8 00:29:33.960873 kernel: acpiphp: Slot [24] registered Nov 8 00:29:33.960889 kernel: acpiphp: Slot [25] registered Nov 8 00:29:33.960905 kernel: acpiphp: Slot [26] registered Nov 8 00:29:33.960925 kernel: acpiphp: Slot [27] registered Nov 8 00:29:33.960941 kernel: acpiphp: Slot [28] registered Nov 8 00:29:33.960957 kernel: acpiphp: Slot [29] registered Nov 8 00:29:33.960973 kernel: acpiphp: Slot [30] registered Nov 8 00:29:33.960989 kernel: acpiphp: Slot [31] registered Nov 8 00:29:33.961006 kernel: PCI host bridge to bus 0000:00 Nov 8 00:29:33.961173 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:29:33.961303 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:29:33.961432 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:29:33.961562 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 8 00:29:33.961689 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Nov 8 00:29:33.961812 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:29:33.961967 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 8 00:29:33.964214 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 8 00:29:33.964410 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Nov 8 00:29:33.964561 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 8 00:29:33.964703 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Nov 8 00:29:33.964845 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Nov 8 00:29:33.964987 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Nov 8 00:29:33.965150 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Nov 8 00:29:33.965301 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Nov 8 00:29:33.965444 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Nov 8 00:29:33.965598 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Nov 8 00:29:33.965757 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Nov 8 00:29:33.965919 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Nov 8 00:29:33.968139 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Nov 8 00:29:33.968347 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:29:33.968517 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Nov 8 00:29:33.968665 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Nov 8 00:29:33.968814 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Nov 8 00:29:33.968955 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Nov 8 00:29:33.968978 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:29:33.968995 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:29:33.969012 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:29:33.969028 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:29:33.969045 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 8 00:29:33.969065 kernel: iommu: Default domain type: Translated Nov 8 00:29:33.969094 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:29:33.969109 kernel: efivars: Registered efivars operations Nov 8 00:29:33.969123 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:29:33.969139 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:29:33.969154 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Nov 8 00:29:33.969169 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Nov 8 00:29:33.969314 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Nov 8 00:29:33.969454 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Nov 8 00:29:33.969593 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:29:33.969612 kernel: vgaarb: loaded Nov 8 00:29:33.969629 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 8 00:29:33.969645 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 8 00:29:33.969663 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:29:33.969679 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:29:33.969696 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:29:33.969709 kernel: pnp: PnP ACPI init Nov 8 00:29:33.969726 kernel: pnp: PnP ACPI: found 5 devices Nov 8 00:29:33.969739 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:29:33.969753 kernel: NET: Registered PF_INET protocol family Nov 8 00:29:33.969769 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:29:33.969786 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:29:33.969802 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:29:33.969818 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:29:33.969836 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:29:33.969852 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:29:33.969872 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:29:33.969889 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:29:33.969903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:29:33.969919 kernel: NET: Registered PF_XDP protocol family Nov 8 00:29:33.970063 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:29:33.971361 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:29:33.971495 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:29:33.971624 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 8 00:29:33.971750 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Nov 8 00:29:33.971905 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:29:33.971927 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:29:33.971945 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:29:33.971963 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 8 00:29:33.971980 kernel: clocksource: Switched to clocksource tsc Nov 8 00:29:33.971996 kernel: Initialise system trusted keyrings Nov 8 00:29:33.972013 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:29:33.972030 kernel: Key type asymmetric registered Nov 8 00:29:33.972050 kernel: Asymmetric key parser 'x509' registered Nov 8 00:29:33.972064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:29:33.972099 kernel: io scheduler mq-deadline registered Nov 8 00:29:33.972117 kernel: io scheduler kyber registered Nov 8 00:29:33.972134 kernel: io scheduler bfq registered Nov 8 00:29:33.972150 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:29:33.972167 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:29:33.972183 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:29:33.972200 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:29:33.972221 kernel: i8042: Warning: Keylock active Nov 8 00:29:33.972238 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:29:33.972254 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:29:33.972418 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 8 00:29:33.972554 kernel: rtc_cmos 00:00: registered as rtc0 Nov 8 00:29:33.972684 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:29:33 UTC (1762561773) Nov 8 00:29:33.972814 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 8 00:29:33.972834 kernel: intel_pstate: CPU model not supported Nov 8 00:29:33.972855 kernel: efifb: probing for efifb Nov 8 00:29:33.972872 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Nov 8 00:29:33.972888 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Nov 8 00:29:33.972905 kernel: efifb: scrolling: redraw Nov 8 00:29:33.972922 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:29:33.972939 kernel: Console: switching to colour frame buffer device 100x37 Nov 8 00:29:33.972955 kernel: fb0: EFI VGA frame buffer device Nov 8 00:29:33.972972 kernel: pstore: Using crash dump compression: deflate Nov 8 00:29:33.972988 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:29:33.973008 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:29:33.973025 kernel: Segment Routing with IPv6 Nov 8 00:29:33.973042 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:29:33.973058 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:29:33.976161 kernel: Key type dns_resolver registered Nov 8 00:29:33.976189 kernel: IPI shorthand broadcast: enabled Nov 8 00:29:33.976237 kernel: sched_clock: Marking stable (451002902, 129000934)->(670039292, -90035456) Nov 8 00:29:33.976258 kernel: registered taskstats version 1 Nov 8 00:29:33.976276 kernel: Loading compiled-in X.509 certificates Nov 8 00:29:33.976294 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:29:33.976308 kernel: Key type .fscrypt registered Nov 8 00:29:33.976325 kernel: Key type fscrypt-provisioning registered Nov 8 00:29:33.976344 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:29:33.976365 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:29:33.976385 kernel: ima: No architecture policies found Nov 8 00:29:33.976415 kernel: clk: Disabling unused clocks Nov 8 00:29:33.976438 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:29:33.976460 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:29:33.976487 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:29:33.976506 kernel: Run /init as init process Nov 8 00:29:33.976521 kernel: with arguments: Nov 8 00:29:33.976538 kernel: /init Nov 8 00:29:33.976554 kernel: with environment: Nov 8 00:29:33.976571 kernel: HOME=/ Nov 8 00:29:33.976590 kernel: TERM=linux Nov 8 00:29:33.976611 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:29:33.976635 systemd[1]: Detected virtualization amazon. Nov 8 00:29:33.976653 systemd[1]: Detected architecture x86-64. Nov 8 00:29:33.976669 systemd[1]: Running in initrd. Nov 8 00:29:33.976686 systemd[1]: No hostname configured, using default hostname. Nov 8 00:29:33.976703 systemd[1]: Hostname set to . Nov 8 00:29:33.976721 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:29:33.976739 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:29:33.976757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:29:33.976777 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:29:33.976796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:29:33.976814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:29:33.976832 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:29:33.976853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:29:33.976877 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:29:33.976895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:29:33.976913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:29:33.976931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:29:33.976948 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:29:33.976966 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:29:33.976984 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:29:33.977005 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:29:33.977023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:29:33.977040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:29:33.977058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:29:33.977140 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:29:33.977160 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:29:33.977178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:29:33.977196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:29:33.977213 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:29:33.977235 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:29:33.977253 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:29:33.977271 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:29:33.977289 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:29:33.977307 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:29:33.977324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:29:33.977342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:33.977360 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:29:33.977378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:29:33.977435 systemd-journald[178]: Collecting audit messages is disabled. Nov 8 00:29:33.977474 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:29:33.977497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:29:33.977515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:29:33.977534 systemd-journald[178]: Journal started Nov 8 00:29:33.977570 systemd-journald[178]: Runtime Journal (/run/log/journal/ec228a22c1a4e814b7901348a51cdc86) is 4.7M, max 38.2M, 33.4M free. Nov 8 00:29:33.948403 systemd-modules-load[179]: Inserted module 'overlay' Nov 8 00:29:33.998100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:29:33.999105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:29:34.003262 kernel: Bridge firewalling registered Nov 8 00:29:34.003313 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:29:34.005387 systemd-modules-load[179]: Inserted module 'br_netfilter' Nov 8 00:29:34.010437 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:29:34.010359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:29:34.011327 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:29:34.022382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:29:34.026252 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:29:34.030033 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:29:34.034330 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:29:34.041315 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:29:34.042236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:34.052530 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:29:34.061171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:29:34.067796 dracut-cmdline[207]: dracut-dracut-053 Nov 8 00:29:34.072180 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:29:34.069311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:29:34.123004 systemd-resolved[219]: Positive Trust Anchors: Nov 8 00:29:34.123024 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:29:34.123105 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:29:34.131611 systemd-resolved[219]: Defaulting to hostname 'linux'. 
Nov 8 00:29:34.134441 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:29:34.135903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:29:34.162116 kernel: SCSI subsystem initialized Nov 8 00:29:34.173105 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:29:34.185113 kernel: iscsi: registered transport (tcp) Nov 8 00:29:34.206290 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:29:34.206367 kernel: QLogic iSCSI HBA Driver Nov 8 00:29:34.245606 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:29:34.250310 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:29:34.278615 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:29:34.278691 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:29:34.278728 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:29:34.322134 kernel: raid6: avx512x4 gen() 18276 MB/s Nov 8 00:29:34.340107 kernel: raid6: avx512x2 gen() 18013 MB/s Nov 8 00:29:34.358105 kernel: raid6: avx512x1 gen() 17985 MB/s Nov 8 00:29:34.376109 kernel: raid6: avx2x4 gen() 17790 MB/s Nov 8 00:29:34.394107 kernel: raid6: avx2x2 gen() 17819 MB/s Nov 8 00:29:34.413616 kernel: raid6: avx2x1 gen() 13643 MB/s Nov 8 00:29:34.413673 kernel: raid6: using algorithm avx512x4 gen() 18276 MB/s Nov 8 00:29:34.433221 kernel: raid6: .... xor() 7693 MB/s, rmw enabled Nov 8 00:29:34.433278 kernel: raid6: using avx512x2 recovery algorithm Nov 8 00:29:34.456118 kernel: xor: automatically using best checksumming function avx Nov 8 00:29:34.622112 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:29:34.632724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:29:34.638294 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:29:34.653895 systemd-udevd[396]: Using default interface naming scheme 'v255'. Nov 8 00:29:34.659015 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:29:34.669354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:29:34.685604 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Nov 8 00:29:34.715283 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:29:34.719380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:29:34.771889 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:29:34.781365 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:29:34.809982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:29:34.813792 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:29:34.815426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:29:34.817185 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:29:34.825336 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:29:34.846497 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Nov 8 00:29:34.875630 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 8 00:29:34.875881 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 8 00:29:34.881122 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Nov 8 00:29:34.890119 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:53:4a:3a:03:af Nov 8 00:29:34.897141 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:29:34.896865 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:29:34.897033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:29:34.898885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:29:34.900139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:29:34.900347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:34.900424 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:29:34.902511 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:34.911505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:34.929442 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:29:34.929497 kernel: AES CTR mode by8 optimization enabled Nov 8 00:29:34.930731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:29:34.931540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:34.941405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:34.952252 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 8 00:29:34.956123 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 8 00:29:34.969571 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 8 00:29:34.976630 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:29:34.976697 kernel: GPT:9289727 != 33554431 Nov 8 00:29:34.976719 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:29:34.976739 kernel: GPT:9289727 != 33554431 Nov 8 00:29:34.976757 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:29:34.976777 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:29:34.982167 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:34.988357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:29:35.012615 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:29:35.046101 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (443) Nov 8 00:29:35.057129 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (454) Nov 8 00:29:35.101432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:29:35.112140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 8 00:29:35.123446 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 8 00:29:35.133587 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. 
Nov 8 00:29:35.134126 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 8 00:29:35.140256 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:29:35.146153 disk-uuid[627]: Primary Header is updated. Nov 8 00:29:35.146153 disk-uuid[627]: Secondary Entries is updated. Nov 8 00:29:35.146153 disk-uuid[627]: Secondary Header is updated. Nov 8 00:29:35.152096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:29:35.157099 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:29:35.162113 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:29:36.165172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 8 00:29:36.166876 disk-uuid[628]: The operation has completed successfully. Nov 8 00:29:36.305090 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:29:36.305217 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:29:36.322284 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:29:36.327280 sh[970]: Success Nov 8 00:29:36.348101 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:29:36.431488 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:29:36.438178 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:29:36.440843 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:29:36.463232 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:29:36.463295 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:36.465394 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:29:36.469363 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:29:36.469424 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:29:36.569139 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:29:36.571856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:29:36.572985 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:29:36.579349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:29:36.582275 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:29:36.607697 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:36.607769 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:36.611682 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:29:36.623109 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:29:36.636055 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:29:36.638358 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:36.645740 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:29:36.653391 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:29:36.694430 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:29:36.708823 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:29:36.738143 systemd-networkd[1162]: lo: Link UP Nov 8 00:29:36.738154 systemd-networkd[1162]: lo: Gained carrier Nov 8 00:29:36.739877 systemd-networkd[1162]: Enumeration completed Nov 8 00:29:36.740368 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:36.740373 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:36.742657 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:29:36.748310 systemd-networkd[1162]: eth0: Link UP Nov 8 00:29:36.748316 systemd-networkd[1162]: eth0: Gained carrier Nov 8 00:29:36.748329 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:36.748548 systemd[1]: Reached target network.target - Network. Nov 8 00:29:36.763188 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.22.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:29:36.996294 ignition[1099]: Ignition 2.19.0 Nov 8 00:29:36.996305 ignition[1099]: Stage: fetch-offline Nov 8 00:29:36.996608 ignition[1099]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:36.996617 ignition[1099]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:36.997258 ignition[1099]: Ignition finished successfully Nov 8 00:29:36.998224 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:29:37.007359 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:29:37.022985 ignition[1171]: Ignition 2.19.0 Nov 8 00:29:37.022999 ignition[1171]: Stage: fetch Nov 8 00:29:37.023498 ignition[1171]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:37.023512 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:37.023644 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.032315 ignition[1171]: PUT result: OK Nov 8 00:29:37.034061 ignition[1171]: parsed url from cmdline: "" Nov 8 00:29:37.034072 ignition[1171]: no config URL provided Nov 8 00:29:37.034095 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:29:37.034111 ignition[1171]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:29:37.034140 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.034705 ignition[1171]: PUT result: OK Nov 8 00:29:37.034771 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 8 00:29:37.035424 ignition[1171]: GET result: OK Nov 8 00:29:37.035518 ignition[1171]: parsing config with SHA512: 4cbd66e80be75e7f82da92e44f4566a0f391010cee4fd2fc0871be612ac8b68f930e0dcbc5bb1eb5583b4b4a1d8168679b6193dea90fcd386e5a757d23b3fdc7 Nov 8 00:29:37.041404 unknown[1171]: fetched base config from "system" Nov 8 00:29:37.041429 unknown[1171]: fetched base config from "system" Nov 8 00:29:37.041438 unknown[1171]: fetched user config from "aws" Nov 8 00:29:37.044855 ignition[1171]: fetch: fetch complete Nov 8 00:29:37.044870 ignition[1171]: fetch: fetch passed Nov 8 00:29:37.044945 ignition[1171]: Ignition finished successfully Nov 8 00:29:37.047144 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:29:37.052342 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 8 00:29:37.069725 ignition[1177]: Ignition 2.19.0 Nov 8 00:29:37.069741 ignition[1177]: Stage: kargs Nov 8 00:29:37.070260 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:37.070275 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:37.070393 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.072496 ignition[1177]: PUT result: OK Nov 8 00:29:37.075443 ignition[1177]: kargs: kargs passed Nov 8 00:29:37.075509 ignition[1177]: Ignition finished successfully Nov 8 00:29:37.076820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:29:37.083286 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:29:37.095798 ignition[1183]: Ignition 2.19.0 Nov 8 00:29:37.095810 ignition[1183]: Stage: disks Nov 8 00:29:37.096621 ignition[1183]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:37.096633 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:37.096738 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.099333 ignition[1183]: PUT result: OK Nov 8 00:29:37.109252 ignition[1183]: disks: disks passed Nov 8 00:29:37.109340 ignition[1183]: Ignition finished successfully Nov 8 00:29:37.111191 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:29:37.111823 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:29:37.112232 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:29:37.112869 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:29:37.113729 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:29:37.114322 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:29:37.119288 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:29:37.147605 systemd-fsck[1191]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:29:37.150395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:29:37.155189 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:29:37.252096 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:29:37.252521 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:29:37.253466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:29:37.265198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:29:37.267179 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:29:37.268625 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:29:37.268675 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:29:37.268698 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:29:37.279857 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 8 00:29:37.284117 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1210) Nov 8 00:29:37.288294 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:37.288350 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:37.290833 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:29:37.292275 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:29:37.298105 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:29:37.299917 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:29:37.581956 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:29:37.596945 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:29:37.602103 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:29:37.607142 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:29:37.852880 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:29:37.859185 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:29:37.861778 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:29:37.868257 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:29:37.870156 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:37.895549 ignition[1324]: INFO : Ignition 2.19.0 Nov 8 00:29:37.895549 ignition[1324]: INFO : Stage: mount Nov 8 00:29:37.896630 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:37.896630 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:37.896630 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.897723 ignition[1324]: INFO : PUT result: OK Nov 8 00:29:37.899664 ignition[1324]: INFO : mount: mount passed Nov 8 00:29:37.899664 ignition[1324]: INFO : Ignition finished successfully Nov 8 00:29:37.901154 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:29:37.905214 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:29:37.907642 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:29:37.918274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:29:37.935106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1335) Nov 8 00:29:37.939523 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:37.939590 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:37.939624 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 8 00:29:37.947098 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 8 00:29:37.947675 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:29:37.969703 ignition[1352]: INFO : Ignition 2.19.0 Nov 8 00:29:37.969703 ignition[1352]: INFO : Stage: files Nov 8 00:29:37.970993 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:37.970993 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:37.970993 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:37.972328 ignition[1352]: INFO : PUT result: OK Nov 8 00:29:37.974752 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:29:37.975997 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:29:37.975997 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:29:38.015680 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:29:38.016502 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:29:38.016502 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:29:38.016107 unknown[1352]: wrote ssh authorized keys file for user: core Nov 8 00:29:38.031690 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:29:38.032551 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:29:38.102787 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:29:38.294221 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:29:38.294221 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:29:38.295958 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 8 00:29:38.500166 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 8 00:29:38.510338 systemd-networkd[1162]: eth0: Gained IPv6LL Nov 8 00:29:38.616663 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 8 00:29:38.616663 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:29:38.618913 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:29:38.891639 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 8 00:29:39.308934 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:29:39.308934 ignition[1352]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 8 00:29:39.311655 ignition[1352]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:29:39.313018 ignition[1352]: INFO : files: files passed Nov 8 00:29:39.313018 ignition[1352]: INFO : Ignition finished successfully Nov 8 00:29:39.314409 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:29:39.322284 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:29:39.326111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:29:39.329313 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:29:39.330136 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 8 00:29:39.348855 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:39.348855 initrd-setup-root-after-ignition[1380]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:39.352845 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:39.353241 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:29:39.354878 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:29:39.360294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:29:39.391926 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:29:39.392094 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:29:39.393391 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:29:39.394444 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:29:39.395261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:29:39.401254 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:29:39.413951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:29:39.422334 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:29:39.433772 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:29:39.434451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:29:39.435413 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:29:39.436220 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:29:39.436444 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:29:39.437620 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:29:39.438460 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:29:39.439243 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:29:39.440001 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:29:39.440838 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:29:39.441618 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:29:39.442397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:29:39.443189 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:29:39.444346 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:29:39.445172 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:29:39.445876 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:29:39.446052 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:29:39.447170 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:29:39.447951 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:29:39.448711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:29:39.448851 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 8 00:29:39.449511 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:29:39.449675 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:29:39.451034 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:29:39.451230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:29:39.451938 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:29:39.452108 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:29:39.457375 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:29:39.458610 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:29:39.458800 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:29:39.462351 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:29:39.462930 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:29:39.463178 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:29:39.466390 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:29:39.466601 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:29:39.479628 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:29:39.479765 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:29:39.486037 ignition[1404]: INFO : Ignition 2.19.0 Nov 8 00:29:39.486037 ignition[1404]: INFO : Stage: umount Nov 8 00:29:39.490495 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:39.490495 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 8 00:29:39.490495 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 8 00:29:39.490495 ignition[1404]: INFO : PUT result: OK Nov 8 00:29:39.490495 ignition[1404]: INFO : umount: umount passed Nov 8 00:29:39.496247 ignition[1404]: INFO : Ignition finished successfully Nov 8 00:29:39.491937 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:29:39.492095 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:29:39.493376 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:29:39.493480 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:29:39.497834 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:29:39.497903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:29:39.498438 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:29:39.498494 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:29:39.498989 systemd[1]: Stopped target network.target - Network. Nov 8 00:29:39.499455 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:29:39.499513 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:29:39.499997 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:29:39.500455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:29:39.504831 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:29:39.505838 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:29:39.506133 systemd[1]: Stopped target sockets.target - Socket Units. 
Nov 8 00:29:39.506731 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:29:39.506776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:29:39.507193 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:29:39.507245 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:29:39.507754 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:29:39.507832 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:29:39.508370 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:29:39.508513 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:29:39.509211 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:29:39.509783 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:29:39.511904 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:29:39.513185 systemd-networkd[1162]: eth0: DHCPv6 lease lost Nov 8 00:29:39.514728 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:29:39.514848 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:29:39.516010 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:29:39.516621 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:29:39.524219 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:29:39.524677 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:29:39.524737 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:29:39.525259 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:29:39.526011 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:29:39.529630 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:29:39.533864 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:29:39.533977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:39.536061 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:29:39.536143 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:29:39.537197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:29:39.537254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:29:39.543446 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:29:39.544522 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:29:39.546540 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:29:39.546645 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:29:39.549195 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:29:39.549277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:29:39.550030 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:29:39.550092 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:29:39.550722 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:29:39.550781 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:29:39.551795 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Nov 8 00:29:39.551854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:29:39.554284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:29:39.554354 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:29:39.562258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:29:39.562827 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:29:39.562894 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:29:39.563557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:29:39.563616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:39.571276 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:29:39.571398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:29:39.618365 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:29:39.618478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:29:39.619791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:29:39.620784 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:29:39.620858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:29:39.626281 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:29:39.635151 systemd[1]: Switching root. Nov 8 00:29:39.663874 systemd-journald[178]: Journal stopped Nov 8 00:29:41.064307 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Nov 8 00:29:41.064368 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:29:41.064392 kernel: SELinux: policy capability open_perms=1 Nov 8 00:29:41.064404 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:29:41.064416 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:29:41.064427 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:29:41.064442 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:29:41.064454 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:29:41.064469 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:29:41.064481 kernel: audit: type=1403 audit(1762561780.057:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:29:41.064494 systemd[1]: Successfully loaded SELinux policy in 63.275ms. Nov 8 00:29:41.064520 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.766ms. Nov 8 00:29:41.064534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:29:41.064546 systemd[1]: Detected virtualization amazon. Nov 8 00:29:41.064562 systemd[1]: Detected architecture x86-64. Nov 8 00:29:41.064575 systemd[1]: Detected first boot. Nov 8 00:29:41.064588 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:29:41.064600 zram_generator::config[1446]: No configuration found. Nov 8 00:29:41.064618 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:29:41.064631 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 8 00:29:41.064644 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:29:41.064657 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:29:41.064672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:29:41.064685 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:29:41.064697 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:29:41.064714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:29:41.064727 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:29:41.064739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:29:41.064756 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:29:41.064769 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:29:41.064782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:29:41.064797 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:29:41.064810 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:29:41.064822 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:29:41.064835 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:29:41.064848 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:29:41.064860 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:29:41.064873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:29:41.064885 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:29:41.064898 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:29:41.064913 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:29:41.064949 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:29:41.064961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:29:41.064974 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:29:41.064988 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:29:41.065001 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:29:41.065013 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:29:41.065027 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:29:41.065042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:29:41.065055 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:29:41.065067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:29:41.065111 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:29:41.065124 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:29:41.065137 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:29:41.065149 systemd[1]: Mounting media.mount - External Media Directory... 
Nov 8 00:29:41.065162 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:41.065174 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:29:41.065190 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:29:41.065202 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:29:41.065215 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:29:41.065227 systemd[1]: Reached target machines.target - Containers. Nov 8 00:29:41.065239 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:29:41.065252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:41.065265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:29:41.065278 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:29:41.065293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:29:41.065305 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:29:41.065317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:29:41.065330 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:29:41.065343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:29:41.065357 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:29:41.065369 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:29:41.065382 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:29:41.065397 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:29:41.065409 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:29:41.065421 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:29:41.065438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:29:41.065450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:29:41.065463 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:29:41.065475 kernel: loop: module loaded Nov 8 00:29:41.065486 kernel: fuse: init (API version 7.39) Nov 8 00:29:41.065498 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:29:41.065511 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:29:41.065526 systemd[1]: Stopped verity-setup.service. Nov 8 00:29:41.065538 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:41.065551 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:29:41.065563 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:29:41.065575 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:29:41.065590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 8 00:29:41.065602 kernel: ACPI: bus type drm_connector registered Nov 8 00:29:41.065615 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:29:41.065628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:29:41.065640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:29:41.065652 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:29:41.065665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:29:41.065677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:29:41.065711 systemd-journald[1535]: Collecting audit messages is disabled. Nov 8 00:29:41.065736 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:29:41.065749 systemd-journald[1535]: Journal started Nov 8 00:29:41.065773 systemd-journald[1535]: Runtime Journal (/run/log/journal/ec228a22c1a4e814b7901348a51cdc86) is 4.7M, max 38.2M, 33.4M free. Nov 8 00:29:40.702408 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:29:40.749185 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 8 00:29:40.749599 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:29:41.069147 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:29:41.070197 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:29:41.070799 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:29:41.070918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:29:41.071507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:29:41.071621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:29:41.072217 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:29:41.072343 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:29:41.072909 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:29:41.073043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:29:41.073661 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:29:41.074426 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:29:41.090936 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:29:41.097844 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:29:41.105322 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:29:41.106890 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:29:41.107032 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:29:41.109830 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:29:41.115290 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:29:41.125337 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:29:41.127376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:41.137265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Nov 8 00:29:41.139416 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:29:41.140069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:29:41.148797 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:29:41.150219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:29:41.152736 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:29:41.174295 systemd-journald[1535]: Time spent on flushing to /var/log/journal/ec228a22c1a4e814b7901348a51cdc86 is 147.487ms for 980 entries. Nov 8 00:29:41.174295 systemd-journald[1535]: System Journal (/var/log/journal/ec228a22c1a4e814b7901348a51cdc86) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:29:41.330640 systemd-journald[1535]: Received client request to flush runtime journal. Nov 8 00:29:41.330839 kernel: loop0: detected capacity change from 0 to 61336 Nov 8 00:29:41.164336 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:29:41.171152 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:29:41.171929 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:29:41.172640 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:29:41.173443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:29:41.177838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:29:41.192329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:29:41.199843 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:29:41.211530 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:29:41.214433 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:29:41.233363 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:29:41.289716 udevadm[1582]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:29:41.297014 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:29:41.302180 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:29:41.326346 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:41.339889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:29:41.353498 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:29:41.361192 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:29:41.376806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:29:41.387228 kernel: loop1: detected capacity change from 0 to 224512 Nov 8 00:29:41.419777 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Nov 8 00:29:41.420029 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Nov 8 00:29:41.430281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:29:41.502105 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:29:41.609109 kernel: loop3: detected capacity change from 0 to 140768 Nov 8 00:29:41.713296 kernel: loop4: detected capacity change from 0 to 61336 Nov 8 00:29:41.735110 kernel: loop5: detected capacity change from 0 to 224512 Nov 8 00:29:41.768113 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:29:41.797111 kernel: loop7: detected capacity change from 0 to 140768 Nov 8 00:29:41.816930 (sd-merge)[1601]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 8 00:29:41.817626 (sd-merge)[1601]: Merged extensions into '/usr'. Nov 8 00:29:41.825058 systemd[1]: Reloading requested from client PID 1574 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:29:41.825088 systemd[1]: Reloading... Nov 8 00:29:41.943106 zram_generator::config[1624]: No configuration found. Nov 8 00:29:42.152153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:42.215557 systemd[1]: Reloading finished in 389 ms. Nov 8 00:29:42.247296 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:29:42.249418 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:29:42.257326 systemd[1]: Starting ensure-sysext.service... Nov 8 00:29:42.259931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:29:42.265546 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:29:42.287175 systemd[1]: Reloading requested from client PID 1679 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:29:42.287496 systemd[1]: Reloading... Nov 8 00:29:42.303974 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:29:42.305689 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:29:42.308814 systemd-tmpfiles[1680]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:29:42.310434 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Nov 8 00:29:42.310671 systemd-tmpfiles[1680]: ACLs are not supported, ignoring. Nov 8 00:29:42.320873 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:29:42.321031 systemd-tmpfiles[1680]: Skipping /boot Nov 8 00:29:42.336413 systemd-tmpfiles[1680]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:29:42.336581 systemd-tmpfiles[1680]: Skipping /boot Nov 8 00:29:42.354586 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Nov 8 00:29:42.418111 zram_generator::config[1704]: No configuration found. Nov 8 00:29:42.577930 (udev-worker)[1739]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:29:42.593070 ldconfig[1570]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:29:42.699629 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
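
The "(sd-merge)" entries above record systemd-sysext overlaying the shipped extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) onto /usr, which is what triggers the unit reload that follows, and it is also what consumes the /etc/extensions/kubernetes.raw link written during the Ignition files stage. On a running machine the same mechanism can be exercised by hand; a rough sketch, with the image name purely illustrative:

    # place a sysext image where systemd-sysext looks for it, then merge and inspect
    cp kubernetes-v1.32.4-x86-64.raw /etc/extensions/kubernetes.raw
    systemd-sysext merge
    systemd-sysext status
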
Nov 8 00:29:42.708147 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:29:42.712105 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1748) Nov 8 00:29:42.716106 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 8 00:29:42.749098 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:29:42.753105 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Nov 8 00:29:42.782640 kernel: ACPI: button: Sleep Button [SLPF] Nov 8 00:29:42.820121 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:29:42.820993 systemd[1]: Reloading finished in 532 ms. Nov 8 00:29:42.840106 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:29:42.844912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:29:42.846713 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:29:42.850031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:29:42.891330 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:29:42.904204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:29:42.907290 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:29:42.916331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:29:42.920622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:29:42.930580 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:29:42.966338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:42.966654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:42.974468 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:29:42.978374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:29:42.983401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:29:42.984597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:42.984776 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:43.001501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:43.001875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:43.009432 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:29:43.010126 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:29:43.013635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:43.013942 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 8 00:29:43.014746 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:43.023147 systemd[1]: Finished ensure-sysext.service. Nov 8 00:29:43.036720 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:29:43.084394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:29:43.084629 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:29:43.106950 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:29:43.108208 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:29:43.109758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:29:43.109971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:29:43.123070 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:29:43.133105 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:29:43.133305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:29:43.151728 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:29:43.151909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:29:43.158479 augenrules[1902]: No rules Nov 8 00:29:43.163643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:43.165748 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:29:43.166778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:29:43.171805 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:29:43.182486 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:29:43.183723 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:29:43.202291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 8 00:29:43.212313 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:29:43.214441 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:29:43.215459 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:29:43.228529 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:29:43.229310 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:29:43.247833 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:29:43.270262 lvm[1917]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:29:43.316538 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:29:43.318038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:29:43.332868 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Nov 8 00:29:43.359647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:43.364820 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:29:43.369298 systemd-networkd[1864]: lo: Link UP Nov 8 00:29:43.369309 systemd-networkd[1864]: lo: Gained carrier Nov 8 00:29:43.373218 systemd-networkd[1864]: Enumeration completed Nov 8 00:29:43.373365 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:29:43.373712 systemd-networkd[1864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:43.373717 systemd-networkd[1864]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:43.378448 systemd-networkd[1864]: eth0: Link UP Nov 8 00:29:43.378652 systemd-networkd[1864]: eth0: Gained carrier Nov 8 00:29:43.378678 systemd-networkd[1864]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:43.383296 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:29:43.386253 systemd-resolved[1866]: Positive Trust Anchors: Nov 8 00:29:43.386609 systemd-resolved[1866]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:29:43.386728 systemd-resolved[1866]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:29:43.389202 systemd-networkd[1864]: eth0: DHCPv4 address 172.31.22.136/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 8 00:29:43.404293 systemd-resolved[1866]: Defaulting to hostname 'linux'. Nov 8 00:29:43.405543 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:29:43.407451 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:29:43.407963 systemd[1]: Reached target network.target - Network. Nov 8 00:29:43.408431 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:29:43.408798 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:29:43.409249 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:29:43.409614 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:29:43.410204 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:29:43.410638 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:29:43.410974 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:29:43.411317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:29:43.411347 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:29:43.411644 systemd[1]: Reached target timers.target - Timer Units. 
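
The systemd-networkd entries above show eth0 being matched by /usr/lib/systemd/network/zz-default.network and acquiring 172.31.22.136/20 from the VPC's DHCP server at 172.31.16.1. The shipped file itself is not printed in the log; a catch-all DHCP configuration of this kind looks roughly like the following sketch (not the verbatim Flatcar file):

    [Match]
    Name=*

    [Network]
    DHCP=yes
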
Nov 8 00:29:43.413618 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:29:43.415445 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:29:43.423271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:29:43.424471 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:29:43.425029 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:29:43.425505 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:29:43.425957 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:29:43.426001 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:29:43.427206 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:29:43.430302 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:29:43.437310 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:29:43.441259 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:29:43.444308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:29:43.445561 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:29:43.452350 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:29:43.456582 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:29:43.481274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:29:43.495184 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:29:43.502143 jq[1939]: false Nov 8 00:29:43.505887 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:29:43.509027 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:29:43.521710 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:29:43.523236 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:29:43.523887 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:29:43.538608 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:29:43.543436 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:29:43.548675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:29:43.549208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:29:43.549607 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:29:43.549814 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 8 00:29:43.562571 jq[1952]: true Nov 8 00:29:43.596881 extend-filesystems[1940]: Found loop4 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found loop5 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found loop6 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found loop7 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p1 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p2 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p3 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found usr Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p4 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p6 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p7 Nov 8 00:29:43.596881 extend-filesystems[1940]: Found nvme0n1p9 Nov 8 00:29:43.596881 extend-filesystems[1940]: Checking size of /dev/nvme0n1p9 Nov 8 00:29:43.610959 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:29:43.689109 extend-filesystems[1940]: Resized partition /dev/nvme0n1p9 Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: ---------------------------------------------------- Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: corporation. Support and training for ntp-4 are Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: available at https://www.nwtime.org/support Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: ---------------------------------------------------- Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: proto: precision = 0.063 usec (-24) Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: basedate set to 2025-10-26 Nov 8 00:29:43.689677 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: gps base set to 2025-10-26 (week 2390) Nov 8 00:29:43.702454 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 8 00:29:43.702522 update_engine[1951]: I20251108 00:29:43.621577 1951 main.cc:92] Flatcar Update Engine starting Nov 8 00:29:43.702522 update_engine[1951]: I20251108 00:29:43.637012 1951 update_check_scheduler.cc:74] Next update check in 3m52s Nov 8 00:29:43.610705 dbus-daemon[1938]: [system] SELinux support is enabled Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.628 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.641 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.649 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.650 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.653 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.657 INFO Fetch successful Nov 8 00:29:43.703004 
coreos-metadata[1937]: Nov 08 00:29:43.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.666 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.677 INFO Fetch failed with 404: resource not found Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.680 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.680 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.687 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.688 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.690 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.690 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.691 INFO Fetch successful Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.691 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 8 00:29:43.703004 coreos-metadata[1937]: Nov 08 00:29:43.691 INFO Fetch successful Nov 8 00:29:43.616429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 8 00:29:43.724113 extend-filesystems[1980]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listen normally on 3 eth0 172.31.22.136:123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listen normally on 4 lo [::1]:123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: bind(21) AF_INET6 fe80::453:4aff:fe3a:3af%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: unable to create socket on eth0 (5) for fe80::453:4aff:fe3a:3af%2#123 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: failed to init interface for address fe80::453:4aff:fe3a:3af%2 Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: Listening on routing socket on fd #21 for interface updates Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:29:43.724949 ntpd[1942]: 8 Nov 00:29:43 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:29:43.627424 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1864 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:29:43.616466 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:29:43.653840 ntpd[1942]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:29:43.617059 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:29:43.653864 ntpd[1942]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:29:43.617113 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:29:43.653875 ntpd[1942]: ---------------------------------------------------- Nov 8 00:29:43.646648 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:29:43.740596 tar[1959]: linux-amd64/LICENSE Nov 8 00:29:43.740596 tar[1959]: linux-amd64/helm Nov 8 00:29:43.653885 ntpd[1942]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:29:43.651859 (ntainerd)[1971]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:29:43.653895 ntpd[1942]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:29:43.657496 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:29:43.748500 jq[1958]: true Nov 8 00:29:43.653905 ntpd[1942]: corporation. Support and training for ntp-4 are Nov 8 00:29:43.674333 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:29:43.653914 ntpd[1942]: available at https://www.nwtime.org/support Nov 8 00:29:43.678477 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:29:43.653924 ntpd[1942]: ---------------------------------------------------- Nov 8 00:29:43.679550 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
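
The ntpd lines above show the daemon binding UDP port 123 on loopback and on eth0's IPv4 address; the IPv6 link-local bind keeps failing until the interface gains its address later in the log. For reference, a minimal SNTP query against such a server, built from the standard 48-byte NTP packet (the server address is an assumption, and a freshly started, still-unsynchronized ntpd may decline to answer):

import socket, struct, time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_query(server="127.0.0.1", port=123, timeout=2.0):
    # First byte 0x1b = LI 0, version 3, mode 3 (client); the rest of the 48-byte packet is zero.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(48)
    # Transmit timestamp: 32-bit big-endian seconds field at byte offset 40.
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print("server time:", time.ctime(sntp_query()))
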
Nov 8 00:29:43.666845 ntpd[1942]: proto: precision = 0.063 usec (-24) Nov 8 00:29:43.667219 ntpd[1942]: basedate set to 2025-10-26 Nov 8 00:29:43.667236 ntpd[1942]: gps base set to 2025-10-26 (week 2390) Nov 8 00:29:43.693846 ntpd[1942]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:29:43.693912 ntpd[1942]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:29:43.707023 ntpd[1942]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:29:43.707073 ntpd[1942]: Listen normally on 3 eth0 172.31.22.136:123 Nov 8 00:29:43.707140 ntpd[1942]: Listen normally on 4 lo [::1]:123 Nov 8 00:29:43.707198 ntpd[1942]: bind(21) AF_INET6 fe80::453:4aff:fe3a:3af%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:29:43.707235 ntpd[1942]: unable to create socket on eth0 (5) for fe80::453:4aff:fe3a:3af%2#123 Nov 8 00:29:43.707253 ntpd[1942]: failed to init interface for address fe80::453:4aff:fe3a:3af%2 Nov 8 00:29:43.707292 ntpd[1942]: Listening on routing socket on fd #21 for interface updates Nov 8 00:29:43.723583 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:29:43.723615 ntpd[1942]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:29:43.779018 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:29:43.807699 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:29:43.811709 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:29:43.851116 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 8 00:29:43.886257 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1747) Nov 8 00:29:43.886344 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 8 00:29:43.886344 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:29:43.886344 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 8 00:29:43.906790 extend-filesystems[1940]: Resized filesystem in /dev/nvme0n1p9 Nov 8 00:29:43.888786 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:29:43.889036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:29:44.024155 bash[2034]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:29:44.029107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:29:44.038322 systemd[1]: Starting sshkeys.service... Nov 8 00:29:44.051742 systemd-logind[1950]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:29:44.060158 systemd-logind[1950]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 8 00:29:44.060190 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:29:44.061742 systemd-logind[1950]: New seat seat0. Nov 8 00:29:44.065864 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:29:44.093600 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:29:44.106557 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:29:44.195050 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:29:44.196189 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
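
The extend-filesystems entries record an on-line ext4 grow: the root partition /dev/nvme0n1p9 is enlarged, then resize2fs expands the mounted filesystem from 553472 to 3587067 4k blocks, as the kernel line confirms. A minimal sketch of the resize step (assumes root privileges and that resize2fs from e2fsprogs is on PATH; with no size argument it grows the filesystem to fill the device):

import subprocess

def online_resize(device="/dev/nvme0n1p9"):
    # ext4 supports growing a mounted filesystem, so this works on the live root
    # ("on-line resizing required" in the log above).
    subprocess.run(["resize2fs", device], check=True)

if __name__ == "__main__":
    online_resize()
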
Nov 8 00:29:44.218826 dbus-daemon[1938]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1979 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:29:44.240347 systemd[1]: Starting polkit.service - Authorization Manager... Nov 8 00:29:44.289539 coreos-metadata[2086]: Nov 08 00:29:44.289 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 8 00:29:44.290719 coreos-metadata[2086]: Nov 08 00:29:44.290 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 8 00:29:44.291459 coreos-metadata[2086]: Nov 08 00:29:44.291 INFO Fetch successful Nov 8 00:29:44.291603 coreos-metadata[2086]: Nov 08 00:29:44.291 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:29:44.295101 coreos-metadata[2086]: Nov 08 00:29:44.294 INFO Fetch successful Nov 8 00:29:44.299887 polkitd[2125]: Started polkitd version 121 Nov 8 00:29:44.300644 unknown[2086]: wrote ssh authorized keys file for user: core Nov 8 00:29:44.317504 locksmithd[1981]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:29:44.340579 polkitd[2125]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:29:44.340661 polkitd[2125]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:29:44.349485 polkitd[2125]: Finished loading, compiling and executing 2 rules Nov 8 00:29:44.350310 dbus-daemon[1938]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:29:44.350534 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:29:44.353136 polkitd[2125]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:29:44.363394 update-ssh-keys[2131]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:29:44.360842 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:29:44.363739 systemd[1]: Finished sshkeys.service. Nov 8 00:29:44.400950 systemd-hostnamed[1979]: Hostname set to (transient) Nov 8 00:29:44.401097 systemd-resolved[1866]: System hostname changed to 'ip-172-31-22-136'. Nov 8 00:29:44.526740 containerd[1971]: time="2025-11-08T00:29:44.525848985Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:29:44.630590 containerd[1971]: time="2025-11-08T00:29:44.630465828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.636776 containerd[1971]: time="2025-11-08T00:29:44.636504171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:44.636776 containerd[1971]: time="2025-11-08T00:29:44.636556193Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:29:44.636776 containerd[1971]: time="2025-11-08T00:29:44.636584956Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:29:44.636776 containerd[1971]: time="2025-11-08T00:29:44.636764594Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Nov 8 00:29:44.636776 containerd[1971]: time="2025-11-08T00:29:44.636784312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.636863150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.636884620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.637177647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.637205536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.637228677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.637244741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637465 containerd[1971]: time="2025-11-08T00:29:44.637349420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.637759 containerd[1971]: time="2025-11-08T00:29:44.637610032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:44.639697 containerd[1971]: time="2025-11-08T00:29:44.638906101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:44.639697 containerd[1971]: time="2025-11-08T00:29:44.638934988Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:29:44.639697 containerd[1971]: time="2025-11-08T00:29:44.639055682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:29:44.639697 containerd[1971]: time="2025-11-08T00:29:44.639149708Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:29:44.644757 containerd[1971]: time="2025-11-08T00:29:44.644724446Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:29:44.644995 containerd[1971]: time="2025-11-08T00:29:44.644975327Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:29:44.646455 containerd[1971]: time="2025-11-08T00:29:44.645497931Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:29:44.646455 containerd[1971]: time="2025-11-08T00:29:44.645527861Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Nov 8 00:29:44.646455 containerd[1971]: time="2025-11-08T00:29:44.645553810Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:29:44.646455 containerd[1971]: time="2025-11-08T00:29:44.645719165Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:29:44.647963 containerd[1971]: time="2025-11-08T00:29:44.647938258Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:29:44.648609 containerd[1971]: time="2025-11-08T00:29:44.648586815Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:29:44.649136 containerd[1971]: time="2025-11-08T00:29:44.649115614Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:29:44.649220 containerd[1971]: time="2025-11-08T00:29:44.649205804Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.649988473Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650021911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650042214Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650063331Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650099039Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650119478Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650148162Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650165939Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650192574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650210946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650228102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650247123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650266694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Nov 8 00:29:44.651538 containerd[1971]: time="2025-11-08T00:29:44.650284504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650300646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650320358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650338286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650363016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650379142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650400459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650420358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650443779Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650474491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650491545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650518327Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650581529Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650610819Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:29:44.652090 containerd[1971]: time="2025-11-08T00:29:44.650627799Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:29:44.652588 containerd[1971]: time="2025-11-08T00:29:44.650644965Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:29:44.652588 containerd[1971]: time="2025-11-08T00:29:44.650659400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.652588 containerd[1971]: time="2025-11-08T00:29:44.650676685Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:29:44.652588 containerd[1971]: time="2025-11-08T00:29:44.650689925Z" level=info msg="NRI interface is disabled by configuration." 
Nov 8 00:29:44.652588 containerd[1971]: time="2025-11-08T00:29:44.650703036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:29:44.654302 ntpd[1942]: bind(24) AF_INET6 fe80::453:4aff:fe3a:3af%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:29:44.654758 ntpd[1942]: 8 Nov 00:29:44 ntpd[1942]: bind(24) AF_INET6 fe80::453:4aff:fe3a:3af%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:29:44.654758 ntpd[1942]: 8 Nov 00:29:44 ntpd[1942]: unable to create socket on eth0 (6) for fe80::453:4aff:fe3a:3af%2#123 Nov 8 00:29:44.654758 ntpd[1942]: 8 Nov 00:29:44 ntpd[1942]: failed to init interface for address fe80::453:4aff:fe3a:3af%2 Nov 8 00:29:44.654342 ntpd[1942]: unable to create socket on eth0 (6) for fe80::453:4aff:fe3a:3af%2#123 Nov 8 00:29:44.654381 ntpd[1942]: failed to init interface for address fe80::453:4aff:fe3a:3af%2 Nov 8 00:29:44.655464 containerd[1971]: time="2025-11-08T00:29:44.655124064Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:29:44.656140 containerd[1971]: 
time="2025-11-08T00:29:44.655735007Z" level=info msg="Connect containerd service" Nov 8 00:29:44.656280 containerd[1971]: time="2025-11-08T00:29:44.656254380Z" level=info msg="using legacy CRI server" Nov 8 00:29:44.656764 containerd[1971]: time="2025-11-08T00:29:44.656339856Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:29:44.656945 containerd[1971]: time="2025-11-08T00:29:44.656927618Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:29:44.662518 containerd[1971]: time="2025-11-08T00:29:44.662478977Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:29:44.663018 containerd[1971]: time="2025-11-08T00:29:44.662942263Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:29:44.663018 containerd[1971]: time="2025-11-08T00:29:44.663008955Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:29:44.663148 containerd[1971]: time="2025-11-08T00:29:44.663054798Z" level=info msg="Start subscribing containerd event" Nov 8 00:29:44.663148 containerd[1971]: time="2025-11-08T00:29:44.663113753Z" level=info msg="Start recovering state" Nov 8 00:29:44.663221 containerd[1971]: time="2025-11-08T00:29:44.663198819Z" level=info msg="Start event monitor" Nov 8 00:29:44.663260 containerd[1971]: time="2025-11-08T00:29:44.663218104Z" level=info msg="Start snapshots syncer" Nov 8 00:29:44.663260 containerd[1971]: time="2025-11-08T00:29:44.663231397Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:29:44.663260 containerd[1971]: time="2025-11-08T00:29:44.663244406Z" level=info msg="Start streaming server" Nov 8 00:29:44.663412 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:29:44.668926 containerd[1971]: time="2025-11-08T00:29:44.668411606Z" level=info msg="containerd successfully booted in 0.143790s" Nov 8 00:29:44.925473 tar[1959]: linux-amd64/README.md Nov 8 00:29:44.943185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:29:44.947239 sshd_keygen[1985]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:29:44.971927 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:29:44.979386 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:29:44.986817 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:29:44.987037 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:29:44.993419 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:29:45.005739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:29:45.011677 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:29:45.014853 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:29:45.016061 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:29:45.102296 systemd-networkd[1864]: eth0: Gained IPv6LL Nov 8 00:29:45.105664 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:29:45.106845 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:29:45.111424 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
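
The containerd error above ("no network config found in /etc/cni/net.d") only means that no CNI configuration exists yet; the "Start cni network conf syncer for default" line shows the CRI plugin watching that directory and wiring up pod networking once a config appears, which on a cluster node normally arrives with the CNI add-on. As a hypothetical illustration of the kind of file the syncer looks for, the sketch below writes a basic bridge conflist; the network name, bridge, and subnet are made up for the example, not taken from this system:

import json, os

# Illustrative bridge network; real clusters usually get their conflist from the CNI add-on.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)
with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
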
Nov 8 00:29:45.119727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:45.123371 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:29:45.190313 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:29:45.197118 amazon-ssm-agent[2163]: Initializing new seelog logger Nov 8 00:29:45.197118 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Nov 8 00:29:45.197576 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.197576 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.197704 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 processing appconfig overrides Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 processing appconfig overrides Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.198724 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 processing appconfig overrides Nov 8 00:29:45.198965 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO Proxy environment variables: Nov 8 00:29:45.201637 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.201637 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 8 00:29:45.201744 amazon-ssm-agent[2163]: 2025/11/08 00:29:45 processing appconfig overrides Nov 8 00:29:45.298421 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO https_proxy: Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO http_proxy: Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO no_proxy: Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO Checking if agent identity type OnPrem can be assumed Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO Checking if agent identity type EC2 can be assumed Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO Agent will take identity from EC2 Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] Starting Core Agent Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [Registrar] Starting registrar module Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [EC2Identity] EC2 registration was successful. Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [CredentialRefresher] credentialRefresher has started Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [CredentialRefresher] Starting credentials refresher loop Nov 8 00:29:45.385869 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 8 00:29:45.396095 amazon-ssm-agent[2163]: 2025-11-08 00:29:45 INFO [CredentialRefresher] Next credential rotation will be in 31.583327998616667 minutes Nov 8 00:29:46.399507 amazon-ssm-agent[2163]: 2025-11-08 00:29:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 8 00:29:46.502862 amazon-ssm-agent[2163]: 2025-11-08 00:29:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2182) started Nov 8 00:29:46.601346 amazon-ssm-agent[2163]: 2025-11-08 00:29:46 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 8 00:29:47.090663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:47.092073 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:29:47.093896 systemd[1]: Startup finished in 594ms (kernel) + 6.324s (initrd) + 7.098s (userspace) = 14.017s. Nov 8 00:29:47.098751 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:47.654329 ntpd[1942]: Listen normally on 7 eth0 [fe80::453:4aff:fe3a:3af%2]:123 Nov 8 00:29:47.654650 ntpd[1942]: 8 Nov 00:29:47 ntpd[1942]: Listen normally on 7 eth0 [fe80::453:4aff:fe3a:3af%2]:123 Nov 8 00:29:47.995665 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:29:48.000463 systemd[1]: Started sshd@0-172.31.22.136:22-139.178.89.65:42114.service - OpenSSH per-connection server daemon (139.178.89.65:42114). Nov 8 00:29:48.112172 kubelet[2198]: E1108 00:29:48.112120 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:48.115185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:48.115552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:48.115949 systemd[1]: kubelet.service: Consumed 1.034s CPU time. Nov 8 00:29:48.188899 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 42114 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:48.191265 sshd[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:48.200572 systemd-logind[1950]: New session 1 of user core. Nov 8 00:29:48.201729 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 8 00:29:48.207602 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:29:48.232958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:29:48.238442 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:29:48.244962 (systemd)[2214]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:29:48.358159 systemd[2214]: Queued start job for default target default.target. Nov 8 00:29:48.369594 systemd[2214]: Created slice app.slice - User Application Slice. Nov 8 00:29:48.369638 systemd[2214]: Reached target paths.target - Paths. Nov 8 00:29:48.369659 systemd[2214]: Reached target timers.target - Timers. Nov 8 00:29:48.371013 systemd[2214]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:29:48.383328 systemd[2214]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:29:48.383412 systemd[2214]: Reached target sockets.target - Sockets. Nov 8 00:29:48.383433 systemd[2214]: Reached target basic.target - Basic System. Nov 8 00:29:48.383483 systemd[2214]: Reached target default.target - Main User Target. Nov 8 00:29:48.383521 systemd[2214]: Startup finished in 131ms. Nov 8 00:29:48.383963 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:29:48.390336 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:29:48.553329 systemd[1]: Started sshd@1-172.31.22.136:22-139.178.89.65:42118.service - OpenSSH per-connection server daemon (139.178.89.65:42118). Nov 8 00:29:48.712673 sshd[2225]: Accepted publickey for core from 139.178.89.65 port 42118 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:48.714301 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:48.719193 systemd-logind[1950]: New session 2 of user core. Nov 8 00:29:48.725286 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:29:48.848629 sshd[2225]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:48.851565 systemd[1]: sshd@1-172.31.22.136:22-139.178.89.65:42118.service: Deactivated successfully. Nov 8 00:29:48.853162 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:29:48.854324 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:29:48.855292 systemd-logind[1950]: Removed session 2. Nov 8 00:29:48.883311 systemd[1]: Started sshd@2-172.31.22.136:22-139.178.89.65:42120.service - OpenSSH per-connection server daemon (139.178.89.65:42120). Nov 8 00:29:49.049578 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 42120 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:49.050879 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:49.055428 systemd-logind[1950]: New session 3 of user core. Nov 8 00:29:49.061286 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:29:49.183371 sshd[2232]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:49.187548 systemd[1]: sshd@2-172.31.22.136:22-139.178.89.65:42120.service: Deactivated successfully. Nov 8 00:29:49.189568 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:29:49.190284 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:29:49.191329 systemd-logind[1950]: Removed session 3. 
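
The "Accepted publickey ... RSA SHA256:1oyAPN..." lines identify the client key by its OpenSSH fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A small sketch that reproduces such a fingerprint from an authorized_keys-style line:

import base64, hashlib, sys

def ssh_fingerprint(pubkey_line):
    # authorized_keys format: "<type> <base64-blob> [comment]"; the fingerprint hashes the decoded blob.
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Pass one authorized_keys line as a single (quoted) argument.
    print(ssh_fingerprint(sys.argv[1]))
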
Nov 8 00:29:49.222456 systemd[1]: Started sshd@3-172.31.22.136:22-139.178.89.65:42124.service - OpenSSH per-connection server daemon (139.178.89.65:42124). Nov 8 00:29:49.379460 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 42124 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:49.379961 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:49.384463 systemd-logind[1950]: New session 4 of user core. Nov 8 00:29:49.395447 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:29:49.512908 sshd[2239]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:49.516623 systemd[1]: sshd@3-172.31.22.136:22-139.178.89.65:42124.service: Deactivated successfully. Nov 8 00:29:49.518799 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:29:49.520197 systemd-logind[1950]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:29:49.521636 systemd-logind[1950]: Removed session 4. Nov 8 00:29:49.552557 systemd[1]: Started sshd@4-172.31.22.136:22-139.178.89.65:42140.service - OpenSSH per-connection server daemon (139.178.89.65:42140). Nov 8 00:29:49.709383 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 42140 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:49.711032 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:49.716097 systemd-logind[1950]: New session 5 of user core. Nov 8 00:29:49.726309 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:29:49.859729 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:29:49.860025 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:29:49.875733 sudo[2249]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:49.898680 sshd[2246]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:49.902618 systemd[1]: sshd@4-172.31.22.136:22-139.178.89.65:42140.service: Deactivated successfully. Nov 8 00:29:49.904441 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:29:49.905200 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:29:49.906414 systemd-logind[1950]: Removed session 5. Nov 8 00:29:49.930116 systemd[1]: Started sshd@5-172.31.22.136:22-139.178.89.65:42144.service - OpenSSH per-connection server daemon (139.178.89.65:42144). Nov 8 00:29:50.092259 sshd[2254]: Accepted publickey for core from 139.178.89.65 port 42144 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:50.093687 sshd[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:50.104981 systemd-logind[1950]: New session 6 of user core. Nov 8 00:29:50.110295 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:29:50.206658 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:29:50.206938 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:29:50.210290 sudo[2258]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:50.215631 sudo[2257]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:29:50.215918 sudo[2257]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:29:50.229597 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:29:50.234640 auditctl[2261]: No rules Nov 8 00:29:50.235096 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:29:50.235341 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:29:50.238358 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:29:50.272050 augenrules[2279]: No rules Nov 8 00:29:50.273664 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:29:50.275915 sudo[2257]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:50.299178 sshd[2254]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:50.303445 systemd[1]: sshd@5-172.31.22.136:22-139.178.89.65:42144.service: Deactivated successfully. Nov 8 00:29:50.305331 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:29:50.306051 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:29:50.307122 systemd-logind[1950]: Removed session 6. Nov 8 00:29:50.327739 systemd[1]: Started sshd@6-172.31.22.136:22-139.178.89.65:42156.service - OpenSSH per-connection server daemon (139.178.89.65:42156). Nov 8 00:29:50.491485 sshd[2287]: Accepted publickey for core from 139.178.89.65 port 42156 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:29:50.493024 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:50.498265 systemd-logind[1950]: New session 7 of user core. Nov 8 00:29:50.502278 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:29:50.604008 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:29:50.604315 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:29:51.081592 (dockerd)[2305]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:29:51.081658 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:29:51.592829 dockerd[2305]: time="2025-11-08T00:29:51.592760409Z" level=info msg="Starting up" Nov 8 00:29:51.713976 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport141424358-merged.mount: Deactivated successfully. Nov 8 00:29:51.772658 dockerd[2305]: time="2025-11-08T00:29:51.772585309Z" level=info msg="Loading containers: start." Nov 8 00:29:51.959110 kernel: Initializing XFRM netlink socket Nov 8 00:29:52.008062 (udev-worker)[2328]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:29:52.057393 systemd-networkd[1864]: docker0: Link UP Nov 8 00:29:52.086691 dockerd[2305]: time="2025-11-08T00:29:52.086647974Z" level=info msg="Loading containers: done." 
Nov 8 00:29:52.101154 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1398740605-merged.mount: Deactivated successfully. Nov 8 00:29:52.112100 dockerd[2305]: time="2025-11-08T00:29:52.112040668Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:29:52.112262 dockerd[2305]: time="2025-11-08T00:29:52.112176172Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:29:52.112293 dockerd[2305]: time="2025-11-08T00:29:52.112278533Z" level=info msg="Daemon has completed initialization" Nov 8 00:29:52.175101 dockerd[2305]: time="2025-11-08T00:29:52.174124480Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:29:52.174306 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:29:53.449340 containerd[1971]: time="2025-11-08T00:29:53.449293188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:29:54.108850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119122069.mount: Deactivated successfully. Nov 8 00:29:56.629982 containerd[1971]: time="2025-11-08T00:29:56.629911001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:56.632409 containerd[1971]: time="2025-11-08T00:29:56.632344652Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 8 00:29:56.636025 containerd[1971]: time="2025-11-08T00:29:56.634424051Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:56.640782 containerd[1971]: time="2025-11-08T00:29:56.640743236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:56.641762 containerd[1971]: time="2025-11-08T00:29:56.641727620Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.192396626s" Nov 8 00:29:56.641818 containerd[1971]: time="2025-11-08T00:29:56.641794490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:29:56.642325 containerd[1971]: time="2025-11-08T00:29:56.642298015Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:29:58.365829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:29:58.371420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:58.740902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
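
The "API listen on /run/docker.sock" line above is the daemon exposing its HTTP API on a Unix socket; the docker CLI is simply a client of that API. A minimal stdlib-only sketch that talks to it directly (the /version endpoint is a standard, unauthenticated Engine API call; running it requires permission to read the socket):

import socket

def docker_get(path="/version", sock_path="/run/docker.sock"):
    # The Docker Engine API is plain HTTP carried over a Unix domain socket.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(docker_get("/version"))
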
Nov 8 00:29:58.751815 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:58.880364 kubelet[2514]: E1108 00:29:58.880284 2514 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:58.884318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:58.884529 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:59.050516 containerd[1971]: time="2025-11-08T00:29:59.049994740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:59.062614 containerd[1971]: time="2025-11-08T00:29:59.062536925Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 8 00:29:59.089473 containerd[1971]: time="2025-11-08T00:29:59.089405082Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:59.126654 containerd[1971]: time="2025-11-08T00:29:59.126571375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:59.127805 containerd[1971]: time="2025-11-08T00:29:59.127688642Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.485357972s" Nov 8 00:29:59.127805 containerd[1971]: time="2025-11-08T00:29:59.127722641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:29:59.128263 containerd[1971]: time="2025-11-08T00:29:59.128233583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:30:00.940010 containerd[1971]: time="2025-11-08T00:30:00.939955112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:00.941983 containerd[1971]: time="2025-11-08T00:30:00.941923873Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 8 00:30:00.944447 containerd[1971]: time="2025-11-08T00:30:00.944419126Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:00.951675 containerd[1971]: time="2025-11-08T00:30:00.951139332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 
00:30:00.952844 containerd[1971]: time="2025-11-08T00:30:00.952812080Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.824436157s" Nov 8 00:30:00.952996 containerd[1971]: time="2025-11-08T00:30:00.952969059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:30:00.953592 containerd[1971]: time="2025-11-08T00:30:00.953561712Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:30:02.478428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount280249559.mount: Deactivated successfully. Nov 8 00:30:03.078490 containerd[1971]: time="2025-11-08T00:30:03.078428483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:03.080346 containerd[1971]: time="2025-11-08T00:30:03.080214114Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 8 00:30:03.083552 containerd[1971]: time="2025-11-08T00:30:03.082663276Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:03.086335 containerd[1971]: time="2025-11-08T00:30:03.085704826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:03.086335 containerd[1971]: time="2025-11-08T00:30:03.086213695Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.132621159s" Nov 8 00:30:03.086335 containerd[1971]: time="2025-11-08T00:30:03.086242944Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:30:03.087017 containerd[1971]: time="2025-11-08T00:30:03.086981867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:30:03.665190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181084931.mount: Deactivated successfully. 
Nov 8 00:30:04.700240 containerd[1971]: time="2025-11-08T00:30:04.700184630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:04.702277 containerd[1971]: time="2025-11-08T00:30:04.702225167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 8 00:30:04.704607 containerd[1971]: time="2025-11-08T00:30:04.704560788Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:04.709718 containerd[1971]: time="2025-11-08T00:30:04.708481984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:04.709718 containerd[1971]: time="2025-11-08T00:30:04.709481700Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.62246194s" Nov 8 00:30:04.709718 containerd[1971]: time="2025-11-08T00:30:04.709510176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:30:04.710058 containerd[1971]: time="2025-11-08T00:30:04.710030037Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:30:05.207102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331051280.mount: Deactivated successfully. 
Nov 8 00:30:05.220642 containerd[1971]: time="2025-11-08T00:30:05.220586872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:05.225374 containerd[1971]: time="2025-11-08T00:30:05.225294674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:30:05.227202 containerd[1971]: time="2025-11-08T00:30:05.227146210Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:05.230525 containerd[1971]: time="2025-11-08T00:30:05.230475526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:05.231258 containerd[1971]: time="2025-11-08T00:30:05.231068711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 521.008713ms" Nov 8 00:30:05.231258 containerd[1971]: time="2025-11-08T00:30:05.231119150Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:30:05.231672 containerd[1971]: time="2025-11-08T00:30:05.231651333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:30:05.799001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937354812.mount: Deactivated successfully. Nov 8 00:30:08.445312 containerd[1971]: time="2025-11-08T00:30:08.445226884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:08.449165 containerd[1971]: time="2025-11-08T00:30:08.448837831Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 8 00:30:08.455238 containerd[1971]: time="2025-11-08T00:30:08.455197344Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:08.466377 containerd[1971]: time="2025-11-08T00:30:08.465272022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:08.466377 containerd[1971]: time="2025-11-08T00:30:08.466221127Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.234541878s" Nov 8 00:30:08.466377 containerd[1971]: time="2025-11-08T00:30:08.466263943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:30:08.961493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
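
Each "Pulled image" line reports both the image size and the wall-clock pull time, so throughput can be read straight off the log; for the etcd:3.5.16-0 pull above that works out to roughly 17-18 MB/s:

# Throughput of the etcd image pull, using the size and duration reported in the log above.
size_bytes = 57_680_541     # size "57680541"
duration_s = 3.234541878    # "in 3.234541878s"
print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")  # ~17.8 MB/s
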
Nov 8 00:30:08.972405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:09.380367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:09.383700 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:09.460926 kubelet[2671]: E1108 00:30:09.460864 2671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:09.464693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:09.465040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:10.996794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:11.005374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:11.038387 systemd[1]: Reloading requested from client PID 2685 ('systemctl') (unit session-7.scope)... Nov 8 00:30:11.038407 systemd[1]: Reloading... Nov 8 00:30:11.174109 zram_generator::config[2726]: No configuration found. Nov 8 00:30:11.323160 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:11.410358 systemd[1]: Reloading finished in 371 ms. Nov 8 00:30:11.460848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:30:11.460963 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:30:11.461349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:11.466485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:11.671229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:11.681566 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:11.739013 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:11.739617 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:11.739617 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
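The kubelet exit above (status=1) is the expected pre-bootstrap failure: /var/lib/kubelet/config.yaml is only written once kubeadm init or join runs, so systemd keeps scheduling restarts until it appears. A small sketch of the same load step, assuming the external k8s.io/kubelet/config/v1beta1 types and sigs.k8s.io/yaml; with the file absent it fails exactly the way the log shows:

package main

import (
    "fmt"
    "log"
    "os"

    kubeletconfig "k8s.io/kubelet/config/v1beta1"
    "sigs.k8s.io/yaml"
)

func main() {
    // Path from the error above; kubeadm has not written it yet on this node.
    data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
    if err != nil {
        // Same failure mode as the log: open ...: no such file or directory.
        log.Fatalf("failed to load kubelet config file: %v", err)
    }

    var cfg kubeletconfig.KubeletConfiguration
    if err := yaml.Unmarshal(data, &cfg); err != nil {
        log.Fatalf("failed to decode KubeletConfiguration: %v", err)
    }
    fmt.Println("cgroupDriver:", cfg.CgroupDriver)
}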
Nov 8 00:30:11.743126 kubelet[2789]: I1108 00:30:11.742915 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:12.072338 kubelet[2789]: I1108 00:30:12.072290 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:30:12.072338 kubelet[2789]: I1108 00:30:12.072327 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:12.072597 kubelet[2789]: I1108 00:30:12.072588 2789 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:30:12.114952 kubelet[2789]: I1108 00:30:12.114876 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:12.125398 kubelet[2789]: E1108 00:30:12.124044 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:12.134624 kubelet[2789]: E1108 00:30:12.134569 2789 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:12.134624 kubelet[2789]: I1108 00:30:12.134619 2789 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:12.140943 kubelet[2789]: I1108 00:30:12.140713 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:30:12.149773 kubelet[2789]: I1108 00:30:12.149253 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:12.149773 kubelet[2789]: I1108 00:30:12.149500 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:12.151821 kubelet[2789]: I1108 00:30:12.151792 2789 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:12.151821 kubelet[2789]: I1108 00:30:12.151822 2789 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:30:12.153368 kubelet[2789]: I1108 00:30:12.153339 2789 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:12.159922 kubelet[2789]: I1108 00:30:12.159725 2789 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:30:12.159922 kubelet[2789]: I1108 00:30:12.159775 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:12.159922 kubelet[2789]: I1108 00:30:12.159800 2789 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:30:12.159922 kubelet[2789]: I1108 00:30:12.159810 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:12.167975 kubelet[2789]: W1108 00:30:12.167926 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-136&limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:12.168428 kubelet[2789]: E1108 00:30:12.168377 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-136&limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:12.168622 kubelet[2789]: I1108 00:30:12.168593 
2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:12.173019 kubelet[2789]: I1108 00:30:12.172454 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:30:12.173019 kubelet[2789]: W1108 00:30:12.172515 2789 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:30:12.173322 kubelet[2789]: W1108 00:30:12.173282 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:12.173364 kubelet[2789]: E1108 00:30:12.173333 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:12.173624 kubelet[2789]: I1108 00:30:12.173605 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:30:12.173659 kubelet[2789]: I1108 00:30:12.173636 2789 server.go:1287] "Started kubelet" Nov 8 00:30:12.175315 kubelet[2789]: I1108 00:30:12.174599 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:12.177241 kubelet[2789]: I1108 00:30:12.176282 2789 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:30:12.179410 kubelet[2789]: I1108 00:30:12.179348 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:12.179613 kubelet[2789]: I1108 00:30:12.179596 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:12.179995 kubelet[2789]: I1108 00:30:12.179959 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:12.186105 kubelet[2789]: E1108 00:30:12.180894 2789 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.136:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.136:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-136.1875e0a135470293 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-136,UID:ip-172-31-22-136,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-136,},FirstTimestamp:2025-11-08 00:30:12.173619859 +0000 UTC m=+0.487463153,LastTimestamp:2025-11-08 00:30:12.173619859 +0000 UTC m=+0.487463153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-136,}" Nov 8 00:30:12.186105 kubelet[2789]: I1108 00:30:12.184596 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:12.188729 kubelet[2789]: I1108 00:30:12.187471 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:30:12.188729 kubelet[2789]: E1108 00:30:12.187683 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ip-172-31-22-136\" not found" Nov 8 00:30:12.190091 kubelet[2789]: I1108 00:30:12.190056 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:30:12.190165 kubelet[2789]: E1108 00:30:12.190051 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": dial tcp 172.31.22.136:6443: connect: connection refused" interval="200ms" Nov 8 00:30:12.190165 kubelet[2789]: I1108 00:30:12.190152 2789 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:30:12.190409 kubelet[2789]: I1108 00:30:12.190396 2789 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:30:12.190541 kubelet[2789]: I1108 00:30:12.190529 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:12.201009 kubelet[2789]: W1108 00:30:12.200970 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:12.201693 kubelet[2789]: E1108 00:30:12.201661 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:12.202761 kubelet[2789]: I1108 00:30:12.202746 2789 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:30:12.202964 kubelet[2789]: I1108 00:30:12.202927 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:12.206195 kubelet[2789]: I1108 00:30:12.206177 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:30:12.208058 kubelet[2789]: I1108 00:30:12.206281 2789 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:30:12.208058 kubelet[2789]: I1108 00:30:12.206301 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:30:12.208058 kubelet[2789]: I1108 00:30:12.206308 2789 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:30:12.208058 kubelet[2789]: E1108 00:30:12.206348 2789 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:12.208795 kubelet[2789]: E1108 00:30:12.208773 2789 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:30:12.214314 kubelet[2789]: W1108 00:30:12.214273 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:12.214449 kubelet[2789]: E1108 00:30:12.214322 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:12.239956 kubelet[2789]: I1108 00:30:12.239919 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:12.239956 kubelet[2789]: I1108 00:30:12.239948 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:12.240142 kubelet[2789]: I1108 00:30:12.239966 2789 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:12.244577 kubelet[2789]: I1108 00:30:12.244544 2789 policy_none.go:49] "None policy: Start" Nov 8 00:30:12.244577 kubelet[2789]: I1108 00:30:12.244573 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:30:12.244713 kubelet[2789]: I1108 00:30:12.244596 2789 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:30:12.251731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:30:12.264570 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:30:12.268236 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:30:12.281335 kubelet[2789]: I1108 00:30:12.281117 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:30:12.281476 kubelet[2789]: I1108 00:30:12.281354 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:12.281476 kubelet[2789]: I1108 00:30:12.281369 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:12.281918 kubelet[2789]: I1108 00:30:12.281785 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:12.283662 kubelet[2789]: E1108 00:30:12.283250 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:30:12.283662 kubelet[2789]: E1108 00:30:12.283301 2789 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-136\" not found" Nov 8 00:30:12.319775 systemd[1]: Created slice kubepods-burstable-pod67847bc13f2347b6400b17d3ce5c05ec.slice - libcontainer container kubepods-burstable-pod67847bc13f2347b6400b17d3ce5c05ec.slice. Nov 8 00:30:12.329039 kubelet[2789]: E1108 00:30:12.328931 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:12.332288 systemd[1]: Created slice kubepods-burstable-podcdeddfcadbac48203210084ca3fdf37f.slice - libcontainer container kubepods-burstable-podcdeddfcadbac48203210084ca3fdf37f.slice. 
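The repeated reflector failures (*v1.Node, *v1.CSIDriver, *v1.RuntimeClass) are all the same symptom: nothing is listening on https://172.31.22.136:6443 yet because the kube-apiserver static pod is still being created below. For illustration, the equivalent list call can be issued with client-go; this sketch assumes a kubeconfig at /etc/kubernetes/kubelet.conf pointing at that endpoint, and it will keep returning "connection refused" until the apiserver container starts:

package main

import (
    "context"
    "fmt"
    "log"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumption: kubeconfig targeting https://172.31.22.136:6443, the endpoint the reflectors retry.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

    // Equivalent of the reflector's "list *v1.Node" request seen in the log.
    nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
        FieldSelector: "metadata.name=ip-172-31-22-136",
    })
    if err != nil {
        log.Fatal(err) // "connection refused" until kube-apiserver is up
    }
    fmt.Println("nodes:", len(nodes.Items))
}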
Nov 8 00:30:12.340299 kubelet[2789]: E1108 00:30:12.340271 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:12.342785 systemd[1]: Created slice kubepods-burstable-podd5fd43a327dce0f5ac63390d5a093372.slice - libcontainer container kubepods-burstable-podd5fd43a327dce0f5ac63390d5a093372.slice. Nov 8 00:30:12.344926 kubelet[2789]: E1108 00:30:12.344901 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:12.384361 kubelet[2789]: I1108 00:30:12.384069 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:12.384484 kubelet[2789]: E1108 00:30:12.384432 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.136:6443/api/v1/nodes\": dial tcp 172.31.22.136:6443: connect: connection refused" node="ip-172-31-22-136" Nov 8 00:30:12.391463 kubelet[2789]: I1108 00:30:12.391387 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:12.391463 kubelet[2789]: E1108 00:30:12.391386 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": dial tcp 172.31.22.136:6443: connect: connection refused" interval="400ms" Nov 8 00:30:12.391463 kubelet[2789]: I1108 00:30:12.391437 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:12.391463 kubelet[2789]: I1108 00:30:12.391465 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:12.391818 kubelet[2789]: I1108 00:30:12.391487 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:12.391818 kubelet[2789]: I1108 00:30:12.391513 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-ca-certs\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:12.391818 kubelet[2789]: I1108 00:30:12.391538 2789 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:12.391818 kubelet[2789]: I1108 00:30:12.391563 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5fd43a327dce0f5ac63390d5a093372-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-136\" (UID: \"d5fd43a327dce0f5ac63390d5a093372\") " pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:12.391818 kubelet[2789]: I1108 00:30:12.391584 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:12.391964 kubelet[2789]: I1108 00:30:12.391607 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:12.586229 kubelet[2789]: I1108 00:30:12.585809 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:12.586229 kubelet[2789]: E1108 00:30:12.586170 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.136:6443/api/v1/nodes\": dial tcp 172.31.22.136:6443: connect: connection refused" node="ip-172-31-22-136" Nov 8 00:30:12.630941 containerd[1971]: time="2025-11-08T00:30:12.630882562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-136,Uid:67847bc13f2347b6400b17d3ce5c05ec,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:12.651977 containerd[1971]: time="2025-11-08T00:30:12.651663731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-136,Uid:d5fd43a327dce0f5ac63390d5a093372,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:12.651977 containerd[1971]: time="2025-11-08T00:30:12.651664566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-136,Uid:cdeddfcadbac48203210084ca3fdf37f,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:12.792589 kubelet[2789]: E1108 00:30:12.792522 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": dial tcp 172.31.22.136:6443: connect: connection refused" interval="800ms" Nov 8 00:30:12.988637 kubelet[2789]: I1108 00:30:12.988504 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:12.989092 kubelet[2789]: E1108 00:30:12.989039 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.136:6443/api/v1/nodes\": dial tcp 172.31.22.136:6443: connect: connection refused" node="ip-172-31-22-136" Nov 8 00:30:13.111744 kubelet[2789]: W1108 00:30:13.111685 2789 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:13.111877 kubelet[2789]: E1108 00:30:13.111765 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.136:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:13.146632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375389756.mount: Deactivated successfully. Nov 8 00:30:13.159557 containerd[1971]: time="2025-11-08T00:30:13.159513191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:13.161517 containerd[1971]: time="2025-11-08T00:30:13.161481086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:13.163514 containerd[1971]: time="2025-11-08T00:30:13.163467113Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:30:13.165209 containerd[1971]: time="2025-11-08T00:30:13.165166545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:13.167169 containerd[1971]: time="2025-11-08T00:30:13.167135464Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:13.169571 containerd[1971]: time="2025-11-08T00:30:13.169536579Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:13.171330 containerd[1971]: time="2025-11-08T00:30:13.171263895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:13.173995 containerd[1971]: time="2025-11-08T00:30:13.173953350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:13.176102 containerd[1971]: time="2025-11-08T00:30:13.174654184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.697248ms" Nov 8 00:30:13.176850 containerd[1971]: time="2025-11-08T00:30:13.176813756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 525.067456ms" Nov 8 00:30:13.178618 
containerd[1971]: time="2025-11-08T00:30:13.178584378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.762752ms" Nov 8 00:30:13.188274 kubelet[2789]: W1108 00:30:13.188198 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-136&limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:13.188274 kubelet[2789]: E1108 00:30:13.188274 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.136:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-136&limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:13.374986 containerd[1971]: time="2025-11-08T00:30:13.374696361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:13.374986 containerd[1971]: time="2025-11-08T00:30:13.374755137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:13.374986 containerd[1971]: time="2025-11-08T00:30:13.374783185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.374986 containerd[1971]: time="2025-11-08T00:30:13.374875805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.391535 containerd[1971]: time="2025-11-08T00:30:13.391390837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:13.391535 containerd[1971]: time="2025-11-08T00:30:13.391465590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:13.392154 containerd[1971]: time="2025-11-08T00:30:13.391927516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.392429 containerd[1971]: time="2025-11-08T00:30:13.392056403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.394533 containerd[1971]: time="2025-11-08T00:30:13.394429750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:13.399112 containerd[1971]: time="2025-11-08T00:30:13.398353711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:13.399112 containerd[1971]: time="2025-11-08T00:30:13.398387722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.399112 containerd[1971]: time="2025-11-08T00:30:13.398507031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:13.416677 systemd[1]: Started cri-containerd-d5265443682d43453e647b49052ec1abda3ecf564f813c9377bc1222266994b0.scope - libcontainer container d5265443682d43453e647b49052ec1abda3ecf564f813c9377bc1222266994b0. Nov 8 00:30:13.451408 systemd[1]: Started cri-containerd-43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72.scope - libcontainer container 43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72. Nov 8 00:30:13.453621 systemd[1]: Started cri-containerd-ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033.scope - libcontainer container ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033. Nov 8 00:30:13.548784 containerd[1971]: time="2025-11-08T00:30:13.548712861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-136,Uid:cdeddfcadbac48203210084ca3fdf37f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033\"" Nov 8 00:30:13.556979 containerd[1971]: time="2025-11-08T00:30:13.556937319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-136,Uid:67847bc13f2347b6400b17d3ce5c05ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5265443682d43453e647b49052ec1abda3ecf564f813c9377bc1222266994b0\"" Nov 8 00:30:13.559972 kubelet[2789]: W1108 00:30:13.559893 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 00:30:13.560156 kubelet[2789]: E1108 00:30:13.559975 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.136:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:13.562505 containerd[1971]: time="2025-11-08T00:30:13.562355416Z" level=info msg="CreateContainer within sandbox \"ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:30:13.563428 containerd[1971]: time="2025-11-08T00:30:13.563398595Z" level=info msg="CreateContainer within sandbox \"d5265443682d43453e647b49052ec1abda3ecf564f813c9377bc1222266994b0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:30:13.563810 containerd[1971]: time="2025-11-08T00:30:13.563708287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-136,Uid:d5fd43a327dce0f5ac63390d5a093372,Namespace:kube-system,Attempt:0,} returns sandbox id \"43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72\"" Nov 8 00:30:13.566452 containerd[1971]: time="2025-11-08T00:30:13.566410608Z" level=info msg="CreateContainer within sandbox \"43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:30:13.569962 kubelet[2789]: W1108 00:30:13.569915 2789 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.136:6443: connect: connection refused Nov 8 
00:30:13.570251 kubelet[2789]: E1108 00:30:13.570195 2789 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.136:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:13.593898 kubelet[2789]: E1108 00:30:13.593821 2789 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": dial tcp 172.31.22.136:6443: connect: connection refused" interval="1.6s" Nov 8 00:30:13.611352 containerd[1971]: time="2025-11-08T00:30:13.611306603Z" level=info msg="CreateContainer within sandbox \"ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae\"" Nov 8 00:30:13.613880 containerd[1971]: time="2025-11-08T00:30:13.613829032Z" level=info msg="CreateContainer within sandbox \"d5265443682d43453e647b49052ec1abda3ecf564f813c9377bc1222266994b0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b9b4027b665983fd034089d09899e829a174bc1c06234d64292bbbb4091fc124\"" Nov 8 00:30:13.614090 containerd[1971]: time="2025-11-08T00:30:13.614049839Z" level=info msg="StartContainer for \"937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae\"" Nov 8 00:30:13.618247 containerd[1971]: time="2025-11-08T00:30:13.618215959Z" level=info msg="CreateContainer within sandbox \"43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e\"" Nov 8 00:30:13.619380 containerd[1971]: time="2025-11-08T00:30:13.618477946Z" level=info msg="StartContainer for \"b9b4027b665983fd034089d09899e829a174bc1c06234d64292bbbb4091fc124\"" Nov 8 00:30:13.627811 containerd[1971]: time="2025-11-08T00:30:13.627598958Z" level=info msg="StartContainer for \"2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e\"" Nov 8 00:30:13.652654 systemd[1]: Started cri-containerd-b9b4027b665983fd034089d09899e829a174bc1c06234d64292bbbb4091fc124.scope - libcontainer container b9b4027b665983fd034089d09899e829a174bc1c06234d64292bbbb4091fc124. Nov 8 00:30:13.666265 systemd[1]: Started cri-containerd-2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e.scope - libcontainer container 2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e. Nov 8 00:30:13.667404 systemd[1]: Started cri-containerd-937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae.scope - libcontainer container 937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae. 
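The RunPodSandbox, CreateContainer and StartContainer lines above are CRI calls the kubelet makes to containerd over gRPC. As an illustration of the same interface (this is what crictl does under the hood), a rough sketch that dials the runtime service and lists the sandboxes created for the three control-plane static pods; the socket path is again an assumption:

package main

import (
    "context"
    "fmt"
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumption: the CRI endpoint given to the kubelet via --container-runtime-endpoint.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    rt := runtimeapi.NewRuntimeServiceClient(conn)

    ctx := context.Background()
    ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    if err != nil {
        log.Fatal(err)
    }
    // The log reports containerRuntime="containerd" version="v1.7.21".
    fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

    // The kube-apiserver, controller-manager and scheduler sandboxes created
    // above appear here once RunPodSandbox has returned.
    sandboxes, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    if err != nil {
        log.Fatal(err)
    }
    for _, sb := range sandboxes.Items {
        fmt.Println(sb.Id, sb.Metadata.Namespace+"/"+sb.Metadata.Name)
    }
}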
Nov 8 00:30:13.720166 containerd[1971]: time="2025-11-08T00:30:13.719118501Z" level=info msg="StartContainer for \"b9b4027b665983fd034089d09899e829a174bc1c06234d64292bbbb4091fc124\" returns successfully" Nov 8 00:30:13.731335 containerd[1971]: time="2025-11-08T00:30:13.731305412Z" level=info msg="StartContainer for \"937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae\" returns successfully" Nov 8 00:30:13.756322 containerd[1971]: time="2025-11-08T00:30:13.756265667Z" level=info msg="StartContainer for \"2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e\" returns successfully" Nov 8 00:30:13.791645 kubelet[2789]: I1108 00:30:13.791615 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:13.793090 kubelet[2789]: E1108 00:30:13.792288 2789 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.136:6443/api/v1/nodes\": dial tcp 172.31.22.136:6443: connect: connection refused" node="ip-172-31-22-136" Nov 8 00:30:14.214087 kubelet[2789]: E1108 00:30:14.214037 2789 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.136:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.136:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:30:14.246657 kubelet[2789]: E1108 00:30:14.246496 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:14.249059 kubelet[2789]: E1108 00:30:14.249034 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:14.250050 kubelet[2789]: E1108 00:30:14.250030 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:14.434389 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 8 00:30:15.253620 kubelet[2789]: E1108 00:30:15.253586 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:15.254068 kubelet[2789]: E1108 00:30:15.253996 2789 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:15.394519 kubelet[2789]: I1108 00:30:15.394495 2789 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:15.963238 kubelet[2789]: E1108 00:30:15.963188 2789 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-136\" not found" node="ip-172-31-22-136" Nov 8 00:30:16.107665 kubelet[2789]: I1108 00:30:16.107405 2789 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-136" Nov 8 00:30:16.175766 kubelet[2789]: I1108 00:30:16.175702 2789 apiserver.go:52] "Watching apiserver" Nov 8 00:30:16.188690 kubelet[2789]: I1108 00:30:16.188364 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:16.190658 kubelet[2789]: I1108 00:30:16.190626 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:30:16.193783 kubelet[2789]: E1108 00:30:16.193752 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-136\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:16.193783 kubelet[2789]: I1108 00:30:16.193780 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:16.195354 kubelet[2789]: E1108 00:30:16.195323 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-136\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:16.195354 kubelet[2789]: I1108 00:30:16.195350 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:16.196794 kubelet[2789]: E1108 00:30:16.196765 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-136\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:16.252808 kubelet[2789]: I1108 00:30:16.252305 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:16.254636 kubelet[2789]: E1108 00:30:16.254608 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-136\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:16.746594 kubelet[2789]: I1108 00:30:16.746566 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:16.749352 kubelet[2789]: E1108 00:30:16.748778 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-136\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:18.159882 
systemd[1]: Reloading requested from client PID 3063 ('systemctl') (unit session-7.scope)... Nov 8 00:30:18.159912 systemd[1]: Reloading... Nov 8 00:30:18.281183 zram_generator::config[3099]: No configuration found. Nov 8 00:30:18.438311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:18.545493 systemd[1]: Reloading finished in 384 ms. Nov 8 00:30:18.589003 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:18.589289 kubelet[2789]: I1108 00:30:18.589250 2789 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:18.600197 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:30:18.600406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:18.604439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:18.879679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:18.890589 (kubelet)[3163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:18.962901 kubelet[3163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:18.962901 kubelet[3163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:18.962901 kubelet[3163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:18.964607 kubelet[3163]: I1108 00:30:18.962734 3163 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:18.975648 kubelet[3163]: I1108 00:30:18.975613 3163 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:30:18.975648 kubelet[3163]: I1108 00:30:18.975641 3163 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:18.976024 kubelet[3163]: I1108 00:30:18.975993 3163 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:30:18.977458 kubelet[3163]: I1108 00:30:18.977432 3163 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:30:18.980055 kubelet[3163]: I1108 00:30:18.980029 3163 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:18.984932 kubelet[3163]: E1108 00:30:18.984869 3163 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:18.984932 kubelet[3163]: I1108 00:30:18.984925 3163 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
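This second kubelet instance (PID group 3163) finds a client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem and reports that rotation is on, so the earlier CSR "connection refused" errors are behind it. A standard-library sketch for inspecting that certificate's subject and expiry, the values rotation will renew; only the path comes from the log, the rest is plain crypto/x509:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
)

func main() {
    // Path reported by the kubelet when client rotation is enabled.
    data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
    if err != nil {
        log.Fatal(err)
    }

    // The file holds the client certificate and key; walk the PEM blocks and
    // print each certificate's subject and expiry.
    for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
        if block.Type != "CERTIFICATE" {
            continue
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", cert.Subject, "not after:", cert.NotAfter)
    }
}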
Nov 8 00:30:18.988797 kubelet[3163]: I1108 00:30:18.988540 3163 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:30:18.989844 kubelet[3163]: I1108 00:30:18.989797 3163 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:18.990260 kubelet[3163]: I1108 00:30:18.989947 3163 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:18.990260 kubelet[3163]: I1108 00:30:18.990208 3163 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:18.990260 kubelet[3163]: I1108 00:30:18.990217 3163 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:30:18.993302 kubelet[3163]: I1108 00:30:18.993262 3163 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:18.993446 kubelet[3163]: I1108 00:30:18.993430 3163 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:30:18.993483 kubelet[3163]: I1108 00:30:18.993453 3163 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:18.993483 kubelet[3163]: I1108 00:30:18.993470 3163 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:30:18.993483 kubelet[3163]: I1108 00:30:18.993480 3163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:18.995477 kubelet[3163]: I1108 00:30:18.995441 3163 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:18.997487 kubelet[3163]: I1108 00:30:18.997438 3163 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:30:19.012513 kubelet[3163]: I1108 00:30:19.010014 3163 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:30:19.012513 kubelet[3163]: I1108 00:30:19.010049 3163 server.go:1287] "Started kubelet" Nov 8 00:30:19.015095 kubelet[3163]: I1108 00:30:19.014943 3163 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:19.018454 kubelet[3163]: I1108 00:30:19.018035 3163 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:30:19.019937 kubelet[3163]: I1108 00:30:19.019702 3163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:19.020196 kubelet[3163]: I1108 00:30:19.020177 3163 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:19.020529 kubelet[3163]: I1108 00:30:19.020400 3163 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:19.020529 kubelet[3163]: I1108 00:30:19.020037 3163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:19.027554 kubelet[3163]: I1108 00:30:19.027516 3163 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:30:19.027764 kubelet[3163]: I1108 00:30:19.027596 3163 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:30:19.027764 kubelet[3163]: I1108 00:30:19.027703 3163 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:30:19.030464 kubelet[3163]: I1108 00:30:19.030392 3163 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:30:19.030670 kubelet[3163]: I1108 00:30:19.030582 3163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:19.033667 kubelet[3163]: I1108 00:30:19.033637 3163 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:30:19.038792 kubelet[3163]: I1108 00:30:19.037886 3163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:19.038978 kubelet[3163]: I1108 00:30:19.038957 3163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:30:19.039045 kubelet[3163]: I1108 00:30:19.038982 3163 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:30:19.039045 kubelet[3163]: I1108 00:30:19.039000 3163 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:30:19.039045 kubelet[3163]: I1108 00:30:19.039007 3163 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:30:19.039162 kubelet[3163]: E1108 00:30:19.039050 3163 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:19.092641 kubelet[3163]: I1108 00:30:19.092611 3163 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:19.092641 kubelet[3163]: I1108 00:30:19.092627 3163 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:19.092641 kubelet[3163]: I1108 00:30:19.092645 3163 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:19.092816 kubelet[3163]: I1108 00:30:19.092808 3163 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:30:19.092843 kubelet[3163]: I1108 00:30:19.092817 3163 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:30:19.092843 kubelet[3163]: I1108 00:30:19.092840 3163 policy_none.go:49] "None policy: Start" Nov 8 00:30:19.092906 kubelet[3163]: I1108 00:30:19.092850 3163 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:30:19.092906 kubelet[3163]: I1108 00:30:19.092861 3163 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:30:19.092976 kubelet[3163]: I1108 00:30:19.092959 3163 state_mem.go:75] "Updated machine memory state" Nov 8 00:30:19.097197 kubelet[3163]: I1108 00:30:19.096947 3163 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:30:19.098693 kubelet[3163]: I1108 00:30:19.097997 3163 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:19.098693 kubelet[3163]: I1108 00:30:19.098013 3163 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:19.098693 kubelet[3163]: I1108 00:30:19.098418 3163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:19.102769 kubelet[3163]: E1108 00:30:19.102743 3163 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:30:19.140855 kubelet[3163]: I1108 00:30:19.140738 3163 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:19.145483 kubelet[3163]: I1108 00:30:19.145460 3163 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:19.145871 kubelet[3163]: I1108 00:30:19.145733 3163 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.181758 sudo[3198]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 8 00:30:19.182601 sudo[3198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 8 00:30:19.204782 kubelet[3163]: I1108 00:30:19.204758 3163 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-136" Nov 8 00:30:19.215403 kubelet[3163]: I1108 00:30:19.215365 3163 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-136" Nov 8 00:30:19.215524 kubelet[3163]: I1108 00:30:19.215463 3163 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-136" Nov 8 00:30:19.229310 kubelet[3163]: I1108 00:30:19.229272 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-ca-certs\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:19.229310 kubelet[3163]: I1108 00:30:19.229315 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.230501 kubelet[3163]: I1108 00:30:19.229343 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.230501 kubelet[3163]: I1108 00:30:19.229360 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.230501 kubelet[3163]: I1108 00:30:19.229376 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.230501 kubelet[3163]: I1108 00:30:19.229395 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d5fd43a327dce0f5ac63390d5a093372-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-136\" (UID: \"d5fd43a327dce0f5ac63390d5a093372\") " pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:19.230501 kubelet[3163]: I1108 00:30:19.229416 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:19.230656 kubelet[3163]: I1108 00:30:19.229431 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67847bc13f2347b6400b17d3ce5c05ec-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-136\" (UID: \"67847bc13f2347b6400b17d3ce5c05ec\") " pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:19.230656 kubelet[3163]: I1108 00:30:19.229447 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdeddfcadbac48203210084ca3fdf37f-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-136\" (UID: \"cdeddfcadbac48203210084ca3fdf37f\") " pod="kube-system/kube-controller-manager-ip-172-31-22-136" Nov 8 00:30:19.838134 sudo[3198]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:20.003555 kubelet[3163]: I1108 00:30:20.003510 3163 apiserver.go:52] "Watching apiserver" Nov 8 00:30:20.028153 kubelet[3163]: I1108 00:30:20.028061 3163 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:30:20.069138 kubelet[3163]: I1108 00:30:20.067722 3163 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:20.072000 kubelet[3163]: I1108 00:30:20.071955 3163 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:20.086492 kubelet[3163]: E1108 00:30:20.086328 3163 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-136\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-136" Nov 8 00:30:20.087482 kubelet[3163]: E1108 00:30:20.087377 3163 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-136\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-136" Nov 8 00:30:20.140734 kubelet[3163]: I1108 00:30:20.140585 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-136" podStartSLOduration=1.140563553 podStartE2EDuration="1.140563553s" podCreationTimestamp="2025-11-08 00:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:20.118423318 +0000 UTC m=+1.220214293" watchObservedRunningTime="2025-11-08 00:30:20.140563553 +0000 UTC m=+1.242354537" Nov 8 00:30:20.158584 kubelet[3163]: I1108 00:30:20.158399 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-136" podStartSLOduration=1.158376314 podStartE2EDuration="1.158376314s" podCreationTimestamp="2025-11-08 00:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-08 00:30:20.141770371 +0000 UTC m=+1.243561343" watchObservedRunningTime="2025-11-08 00:30:20.158376314 +0000 UTC m=+1.260167303" Nov 8 00:30:21.462651 sudo[2290]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:21.486271 sshd[2287]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:21.490744 systemd[1]: sshd@6-172.31.22.136:22-139.178.89.65:42156.service: Deactivated successfully. Nov 8 00:30:21.492492 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:30:21.492653 systemd[1]: session-7.scope: Consumed 4.494s CPU time, 142.1M memory peak, 0B memory swap peak. Nov 8 00:30:21.493414 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:30:21.495330 systemd-logind[1950]: Removed session 7. Nov 8 00:30:22.750102 kubelet[3163]: I1108 00:30:22.750028 3163 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:30:22.753961 containerd[1971]: time="2025-11-08T00:30:22.753915958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:30:22.754999 kubelet[3163]: I1108 00:30:22.754966 3163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:30:23.140642 kubelet[3163]: I1108 00:30:23.140275 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-136" podStartSLOduration=4.140254532 podStartE2EDuration="4.140254532s" podCreationTimestamp="2025-11-08 00:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:20.159820044 +0000 UTC m=+1.261611028" watchObservedRunningTime="2025-11-08 00:30:23.140254532 +0000 UTC m=+4.242045494" Nov 8 00:30:23.721932 systemd[1]: Created slice kubepods-besteffort-pod8e12236a_5382_4d91_aa0d_bd9d06183145.slice - libcontainer container kubepods-besteffort-pod8e12236a_5382_4d91_aa0d_bd9d06183145.slice. Nov 8 00:30:23.742041 systemd[1]: Created slice kubepods-burstable-pod25ba1508_1acc_403c_bc11_c7e6e12d17de.slice - libcontainer container kubepods-burstable-pod25ba1508_1acc_403c_bc11_c7e6e12d17de.slice. 
Nov 8 00:30:23.758731 kubelet[3163]: I1108 00:30:23.758583 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg99t\" (UniqueName: \"kubernetes.io/projected/8e12236a-5382-4d91-aa0d-bd9d06183145-kube-api-access-jg99t\") pod \"kube-proxy-99q4k\" (UID: \"8e12236a-5382-4d91-aa0d-bd9d06183145\") " pod="kube-system/kube-proxy-99q4k" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759261 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-run\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759290 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-bpf-maps\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759317 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e12236a-5382-4d91-aa0d-bd9d06183145-xtables-lock\") pod \"kube-proxy-99q4k\" (UID: \"8e12236a-5382-4d91-aa0d-bd9d06183145\") " pod="kube-system/kube-proxy-99q4k" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759332 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-etc-cni-netd\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759345 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e12236a-5382-4d91-aa0d-bd9d06183145-lib-modules\") pod \"kube-proxy-99q4k\" (UID: \"8e12236a-5382-4d91-aa0d-bd9d06183145\") " pod="kube-system/kube-proxy-99q4k" Nov 8 00:30:23.759510 kubelet[3163]: I1108 00:30:23.759359 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-hostproc\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.759373 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-cgroup\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.759387 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25ba1508-1acc-403c-bc11-c7e6e12d17de-clustermesh-secrets\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.759412 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/8e12236a-5382-4d91-aa0d-bd9d06183145-kube-proxy\") pod \"kube-proxy-99q4k\" (UID: \"8e12236a-5382-4d91-aa0d-bd9d06183145\") " pod="kube-system/kube-proxy-99q4k" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.759431 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-lib-modules\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.759987 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-config-path\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761621 kubelet[3163]: I1108 00:30:23.760031 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cni-path\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.761769 kubelet[3163]: I1108 00:30:23.760049 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-xtables-lock\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.819670 systemd[1]: Created slice kubepods-besteffort-podf7c72f0b_0e0d_4ded_97f8_13dc3f8a51aa.slice - libcontainer container kubepods-besteffort-podf7c72f0b_0e0d_4ded_97f8_13dc3f8a51aa.slice. 
Nov 8 00:30:23.861668 kubelet[3163]: I1108 00:30:23.861209 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-net\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.861668 kubelet[3163]: I1108 00:30:23.861271 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rvg5g\" (UID: \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\") " pod="kube-system/cilium-operator-6c4d7847fc-rvg5g" Nov 8 00:30:23.861668 kubelet[3163]: I1108 00:30:23.861301 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-kernel\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.861668 kubelet[3163]: I1108 00:30:23.861315 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-hubble-tls\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:23.861668 kubelet[3163]: I1108 00:30:23.861332 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqt5k\" (UniqueName: \"kubernetes.io/projected/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-kube-api-access-cqt5k\") pod \"cilium-operator-6c4d7847fc-rvg5g\" (UID: \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\") " pod="kube-system/cilium-operator-6c4d7847fc-rvg5g" Nov 8 00:30:23.861896 kubelet[3163]: I1108 00:30:23.861348 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngz5p\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-kube-api-access-ngz5p\") pod \"cilium-pl9fx\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " pod="kube-system/cilium-pl9fx" Nov 8 00:30:24.031828 containerd[1971]: time="2025-11-08T00:30:24.031707495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99q4k,Uid:8e12236a-5382-4d91-aa0d-bd9d06183145,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:24.046576 containerd[1971]: time="2025-11-08T00:30:24.046527241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl9fx,Uid:25ba1508-1acc-403c-bc11-c7e6e12d17de,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:24.085028 containerd[1971]: time="2025-11-08T00:30:24.083983394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:24.085028 containerd[1971]: time="2025-11-08T00:30:24.084113506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:24.085028 containerd[1971]: time="2025-11-08T00:30:24.084136877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.088121 containerd[1971]: time="2025-11-08T00:30:24.087749157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.116235 containerd[1971]: time="2025-11-08T00:30:24.115661698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:24.116235 containerd[1971]: time="2025-11-08T00:30:24.115755576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:24.116235 containerd[1971]: time="2025-11-08T00:30:24.115781351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.116235 containerd[1971]: time="2025-11-08T00:30:24.115916599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.126329 systemd[1]: Started cri-containerd-b52b62ed20d4302de0bf0f917ce9b415bb1d255866229151fa9fb098969550e1.scope - libcontainer container b52b62ed20d4302de0bf0f917ce9b415bb1d255866229151fa9fb098969550e1. Nov 8 00:30:24.127139 containerd[1971]: time="2025-11-08T00:30:24.126706020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rvg5g,Uid:f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:24.147258 systemd[1]: Started cri-containerd-9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830.scope - libcontainer container 9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830. Nov 8 00:30:24.192868 containerd[1971]: time="2025-11-08T00:30:24.192666821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-99q4k,Uid:8e12236a-5382-4d91-aa0d-bd9d06183145,Namespace:kube-system,Attempt:0,} returns sandbox id \"b52b62ed20d4302de0bf0f917ce9b415bb1d255866229151fa9fb098969550e1\"" Nov 8 00:30:24.198782 containerd[1971]: time="2025-11-08T00:30:24.198539199Z" level=info msg="CreateContainer within sandbox \"b52b62ed20d4302de0bf0f917ce9b415bb1d255866229151fa9fb098969550e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:30:24.203115 containerd[1971]: time="2025-11-08T00:30:24.203050633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pl9fx,Uid:25ba1508-1acc-403c-bc11-c7e6e12d17de,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\"" Nov 8 00:30:24.209315 containerd[1971]: time="2025-11-08T00:30:24.208957126Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 8 00:30:24.220688 containerd[1971]: time="2025-11-08T00:30:24.220044545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:24.220946 containerd[1971]: time="2025-11-08T00:30:24.220833693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:24.220946 containerd[1971]: time="2025-11-08T00:30:24.220860964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.221376 containerd[1971]: time="2025-11-08T00:30:24.221164124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:24.241326 systemd[1]: Started cri-containerd-8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1.scope - libcontainer container 8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1. Nov 8 00:30:24.255328 containerd[1971]: time="2025-11-08T00:30:24.255064733Z" level=info msg="CreateContainer within sandbox \"b52b62ed20d4302de0bf0f917ce9b415bb1d255866229151fa9fb098969550e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22e630551a14b7d3b42194e94b50baeba033541d45bc6aa5780efeebb12da3a2\"" Nov 8 00:30:24.256168 containerd[1971]: time="2025-11-08T00:30:24.256138493Z" level=info msg="StartContainer for \"22e630551a14b7d3b42194e94b50baeba033541d45bc6aa5780efeebb12da3a2\"" Nov 8 00:30:24.297496 systemd[1]: Started cri-containerd-22e630551a14b7d3b42194e94b50baeba033541d45bc6aa5780efeebb12da3a2.scope - libcontainer container 22e630551a14b7d3b42194e94b50baeba033541d45bc6aa5780efeebb12da3a2. Nov 8 00:30:24.322199 containerd[1971]: time="2025-11-08T00:30:24.322155958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rvg5g,Uid:f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\"" Nov 8 00:30:24.349603 containerd[1971]: time="2025-11-08T00:30:24.349560898Z" level=info msg="StartContainer for \"22e630551a14b7d3b42194e94b50baeba033541d45bc6aa5780efeebb12da3a2\" returns successfully" Nov 8 00:30:26.417101 kubelet[3163]: I1108 00:30:26.417013 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-99q4k" podStartSLOduration=3.416989609 podStartE2EDuration="3.416989609s" podCreationTimestamp="2025-11-08 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:25.090518459 +0000 UTC m=+6.192309443" watchObservedRunningTime="2025-11-08 00:30:26.416989609 +0000 UTC m=+7.518780595" Nov 8 00:30:29.061695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2746897483.mount: Deactivated successfully. Nov 8 00:30:29.082052 update_engine[1951]: I20251108 00:30:29.081142 1951 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:30:29.212211 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3545) Nov 8 00:30:29.535240 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3545) Nov 8 00:30:31.915914 containerd[1971]: time="2025-11-08T00:30:31.915838729Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:31.918063 containerd[1971]: time="2025-11-08T00:30:31.918006020Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Nov 8 00:30:31.920335 containerd[1971]: time="2025-11-08T00:30:31.920010268Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:31.921682 containerd[1971]: time="2025-11-08T00:30:31.921647100Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.712374504s" Nov 8 00:30:31.921774 containerd[1971]: time="2025-11-08T00:30:31.921685533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 8 00:30:31.924597 containerd[1971]: time="2025-11-08T00:30:31.924175701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 8 00:30:31.924989 containerd[1971]: time="2025-11-08T00:30:31.924942334Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:30:31.992245 containerd[1971]: time="2025-11-08T00:30:31.992202716Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\"" Nov 8 00:30:31.993262 containerd[1971]: time="2025-11-08T00:30:31.993016063Z" level=info msg="StartContainer for \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\"" Nov 8 00:30:32.092293 systemd[1]: Started cri-containerd-2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878.scope - libcontainer container 2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878. Nov 8 00:30:32.122575 containerd[1971]: time="2025-11-08T00:30:32.122536293Z" level=info msg="StartContainer for \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\" returns successfully" Nov 8 00:30:32.131713 systemd[1]: cri-containerd-2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878.scope: Deactivated successfully. 
Nov 8 00:30:32.358397 containerd[1971]: time="2025-11-08T00:30:32.349352189Z" level=info msg="shim disconnected" id=2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878 namespace=k8s.io Nov 8 00:30:32.358699 containerd[1971]: time="2025-11-08T00:30:32.358397364Z" level=warning msg="cleaning up after shim disconnected" id=2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878 namespace=k8s.io Nov 8 00:30:32.358699 containerd[1971]: time="2025-11-08T00:30:32.358417120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:32.982962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878-rootfs.mount: Deactivated successfully. Nov 8 00:30:33.113872 containerd[1971]: time="2025-11-08T00:30:33.113833003Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:30:33.131997 containerd[1971]: time="2025-11-08T00:30:33.131876268Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\"" Nov 8 00:30:33.133882 containerd[1971]: time="2025-11-08T00:30:33.133100245Z" level=info msg="StartContainer for \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\"" Nov 8 00:30:33.209804 systemd[1]: Started cri-containerd-da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b.scope - libcontainer container da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b. Nov 8 00:30:33.304598 containerd[1971]: time="2025-11-08T00:30:33.304382932Z" level=info msg="StartContainer for \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\" returns successfully" Nov 8 00:30:33.324523 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:30:33.326959 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:30:33.327072 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:30:33.343112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:30:33.347686 systemd[1]: cri-containerd-da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b.scope: Deactivated successfully. Nov 8 00:30:33.384056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 8 00:30:33.427223 containerd[1971]: time="2025-11-08T00:30:33.427135326Z" level=info msg="shim disconnected" id=da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b namespace=k8s.io Nov 8 00:30:33.427223 containerd[1971]: time="2025-11-08T00:30:33.427203655Z" level=warning msg="cleaning up after shim disconnected" id=da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b namespace=k8s.io Nov 8 00:30:33.427223 containerd[1971]: time="2025-11-08T00:30:33.427217229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:33.442004 containerd[1971]: time="2025-11-08T00:30:33.441943205Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:33.984104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b-rootfs.mount: Deactivated successfully. Nov 8 00:30:34.008626 containerd[1971]: time="2025-11-08T00:30:34.008573267Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:34.010390 containerd[1971]: time="2025-11-08T00:30:34.010337160Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Nov 8 00:30:34.012936 containerd[1971]: time="2025-11-08T00:30:34.012620931Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:34.014265 containerd[1971]: time="2025-11-08T00:30:34.014223386Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.08998835s" Nov 8 00:30:34.014733 containerd[1971]: time="2025-11-08T00:30:34.014271102Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 8 00:30:34.018744 containerd[1971]: time="2025-11-08T00:30:34.018709202Z" level=info msg="CreateContainer within sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 8 00:30:34.043450 containerd[1971]: time="2025-11-08T00:30:34.043393012Z" level=info msg="CreateContainer within sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\"" Nov 8 00:30:34.044205 containerd[1971]: time="2025-11-08T00:30:34.044171497Z" level=info msg="StartContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\"" Nov 8 00:30:34.084331 systemd[1]: Started 
cri-containerd-300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe.scope - libcontainer container 300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe. Nov 8 00:30:34.117737 containerd[1971]: time="2025-11-08T00:30:34.117583625Z" level=info msg="StartContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" returns successfully" Nov 8 00:30:34.127898 containerd[1971]: time="2025-11-08T00:30:34.127855101Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:30:34.201011 containerd[1971]: time="2025-11-08T00:30:34.200952975Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\"" Nov 8 00:30:34.201786 containerd[1971]: time="2025-11-08T00:30:34.201737973Z" level=info msg="StartContainer for \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\"" Nov 8 00:30:34.247308 systemd[1]: Started cri-containerd-4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f.scope - libcontainer container 4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f. Nov 8 00:30:34.297046 containerd[1971]: time="2025-11-08T00:30:34.296447808Z" level=info msg="StartContainer for \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\" returns successfully" Nov 8 00:30:34.307175 systemd[1]: cri-containerd-4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f.scope: Deactivated successfully. Nov 8 00:30:34.376416 containerd[1971]: time="2025-11-08T00:30:34.376305264Z" level=info msg="shim disconnected" id=4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f namespace=k8s.io Nov 8 00:30:34.376416 containerd[1971]: time="2025-11-08T00:30:34.376371787Z" level=warning msg="cleaning up after shim disconnected" id=4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f namespace=k8s.io Nov 8 00:30:34.376416 containerd[1971]: time="2025-11-08T00:30:34.376386240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:34.395616 containerd[1971]: time="2025-11-08T00:30:34.395537955Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:34.986155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168294994.mount: Deactivated successfully. Nov 8 00:30:35.154708 containerd[1971]: time="2025-11-08T00:30:35.154632500Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:30:35.195608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275143754.mount: Deactivated successfully. 
Nov 8 00:30:35.204503 containerd[1971]: time="2025-11-08T00:30:35.204450308Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\"" Nov 8 00:30:35.208098 containerd[1971]: time="2025-11-08T00:30:35.207329007Z" level=info msg="StartContainer for \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\"" Nov 8 00:30:35.216977 kubelet[3163]: I1108 00:30:35.215919 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rvg5g" podStartSLOduration=2.5232554719999998 podStartE2EDuration="12.215896763s" podCreationTimestamp="2025-11-08 00:30:23 +0000 UTC" firstStartedPulling="2025-11-08 00:30:24.324232993 +0000 UTC m=+5.426023955" lastFinishedPulling="2025-11-08 00:30:34.016874269 +0000 UTC m=+15.118665246" observedRunningTime="2025-11-08 00:30:34.1814374 +0000 UTC m=+15.283228382" watchObservedRunningTime="2025-11-08 00:30:35.215896763 +0000 UTC m=+16.317687746" Nov 8 00:30:35.253338 systemd[1]: Started cri-containerd-2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66.scope - libcontainer container 2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66. Nov 8 00:30:35.317629 systemd[1]: cri-containerd-2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66.scope: Deactivated successfully. Nov 8 00:30:35.323622 containerd[1971]: time="2025-11-08T00:30:35.323584532Z" level=info msg="StartContainer for \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\" returns successfully" Nov 8 00:30:35.367947 containerd[1971]: time="2025-11-08T00:30:35.367891659Z" level=info msg="shim disconnected" id=2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66 namespace=k8s.io Nov 8 00:30:35.367947 containerd[1971]: time="2025-11-08T00:30:35.367938101Z" level=warning msg="cleaning up after shim disconnected" id=2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66 namespace=k8s.io Nov 8 00:30:35.367947 containerd[1971]: time="2025-11-08T00:30:35.367946687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:35.984062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66-rootfs.mount: Deactivated successfully. Nov 8 00:30:36.155617 containerd[1971]: time="2025-11-08T00:30:36.155462421Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:30:36.185383 containerd[1971]: time="2025-11-08T00:30:36.185328139Z" level=info msg="CreateContainer within sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\"" Nov 8 00:30:36.186382 containerd[1971]: time="2025-11-08T00:30:36.186338531Z" level=info msg="StartContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\"" Nov 8 00:30:36.232301 systemd[1]: Started cri-containerd-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4.scope - libcontainer container 30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4. 
Nov 8 00:30:36.267111 containerd[1971]: time="2025-11-08T00:30:36.266996617Z" level=info msg="StartContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" returns successfully" Nov 8 00:30:36.533824 kubelet[3163]: I1108 00:30:36.533737 3163 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:30:36.585672 systemd[1]: Created slice kubepods-burstable-pod3b9b165c_1e58_416a_bcd4_961fc9b1793e.slice - libcontainer container kubepods-burstable-pod3b9b165c_1e58_416a_bcd4_961fc9b1793e.slice. Nov 8 00:30:36.594506 systemd[1]: Created slice kubepods-burstable-pod4dbf7501_cc21_4c04_ba0a_4138016e6629.slice - libcontainer container kubepods-burstable-pod4dbf7501_cc21_4c04_ba0a_4138016e6629.slice. Nov 8 00:30:36.657691 kubelet[3163]: I1108 00:30:36.657553 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lb7p\" (UniqueName: \"kubernetes.io/projected/3b9b165c-1e58-416a-bcd4-961fc9b1793e-kube-api-access-2lb7p\") pod \"coredns-668d6bf9bc-6jmd5\" (UID: \"3b9b165c-1e58-416a-bcd4-961fc9b1793e\") " pod="kube-system/coredns-668d6bf9bc-6jmd5" Nov 8 00:30:36.657691 kubelet[3163]: I1108 00:30:36.657596 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dbf7501-cc21-4c04-ba0a-4138016e6629-config-volume\") pod \"coredns-668d6bf9bc-vmf2n\" (UID: \"4dbf7501-cc21-4c04-ba0a-4138016e6629\") " pod="kube-system/coredns-668d6bf9bc-vmf2n" Nov 8 00:30:36.657691 kubelet[3163]: I1108 00:30:36.657617 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b9b165c-1e58-416a-bcd4-961fc9b1793e-config-volume\") pod \"coredns-668d6bf9bc-6jmd5\" (UID: \"3b9b165c-1e58-416a-bcd4-961fc9b1793e\") " pod="kube-system/coredns-668d6bf9bc-6jmd5" Nov 8 00:30:36.657691 kubelet[3163]: I1108 00:30:36.657635 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28tpr\" (UniqueName: \"kubernetes.io/projected/4dbf7501-cc21-4c04-ba0a-4138016e6629-kube-api-access-28tpr\") pod \"coredns-668d6bf9bc-vmf2n\" (UID: \"4dbf7501-cc21-4c04-ba0a-4138016e6629\") " pod="kube-system/coredns-668d6bf9bc-vmf2n" Nov 8 00:30:36.891421 containerd[1971]: time="2025-11-08T00:30:36.891363936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6jmd5,Uid:3b9b165c-1e58-416a-bcd4-961fc9b1793e,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:36.907911 containerd[1971]: time="2025-11-08T00:30:36.907612784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmf2n,Uid:4dbf7501-cc21-4c04-ba0a-4138016e6629,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:37.006201 systemd[1]: run-containerd-runc-k8s.io-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4-runc.0jZDjH.mount: Deactivated successfully. Nov 8 00:30:38.784501 (udev-worker)[4136]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:30:38.785936 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. 
Nov 8 00:30:38.786935 systemd-networkd[1864]: cilium_host: Link UP Nov 8 00:30:38.787292 systemd-networkd[1864]: cilium_net: Link UP Nov 8 00:30:38.787433 systemd-networkd[1864]: cilium_net: Gained carrier Nov 8 00:30:38.789841 systemd-networkd[1864]: cilium_host: Gained carrier Nov 8 00:30:38.918319 systemd-networkd[1864]: cilium_host: Gained IPv6LL Nov 8 00:30:38.935215 systemd-networkd[1864]: cilium_net: Gained IPv6LL Nov 8 00:30:38.936922 systemd-networkd[1864]: cilium_vxlan: Link UP Nov 8 00:30:38.936928 systemd-networkd[1864]: cilium_vxlan: Gained carrier Nov 8 00:30:39.404136 kernel: NET: Registered PF_ALG protocol family Nov 8 00:30:40.081163 systemd-networkd[1864]: cilium_vxlan: Gained IPv6LL Nov 8 00:30:40.143897 systemd-networkd[1864]: lxc_health: Link UP Nov 8 00:30:40.155148 (udev-worker)[4190]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:30:40.158178 systemd-networkd[1864]: lxc_health: Gained carrier Nov 8 00:30:40.556647 systemd-networkd[1864]: lxc579e173e2c64: Link UP Nov 8 00:30:40.562106 kernel: eth0: renamed from tmp3244f Nov 8 00:30:40.565527 systemd-networkd[1864]: lxcacbe71b87e90: Link UP Nov 8 00:30:40.574267 systemd-networkd[1864]: lxc579e173e2c64: Gained carrier Nov 8 00:30:40.577283 kernel: eth0: renamed from tmp7f97f Nov 8 00:30:40.584990 systemd-networkd[1864]: lxcacbe71b87e90: Gained carrier Nov 8 00:30:41.230241 systemd-networkd[1864]: lxc_health: Gained IPv6LL Nov 8 00:30:41.742348 systemd-networkd[1864]: lxcacbe71b87e90: Gained IPv6LL Nov 8 00:30:42.072547 kubelet[3163]: I1108 00:30:42.071868 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pl9fx" podStartSLOduration=11.356538094 podStartE2EDuration="19.071843436s" podCreationTimestamp="2025-11-08 00:30:23 +0000 UTC" firstStartedPulling="2025-11-08 00:30:24.207793463 +0000 UTC m=+5.309584425" lastFinishedPulling="2025-11-08 00:30:31.923098792 +0000 UTC m=+13.024889767" observedRunningTime="2025-11-08 00:30:37.191962957 +0000 UTC m=+18.293753940" watchObservedRunningTime="2025-11-08 00:30:42.071843436 +0000 UTC m=+23.173634421" Nov 8 00:30:42.254261 systemd-networkd[1864]: lxc579e173e2c64: Gained IPv6LL Nov 8 00:30:44.654443 ntpd[1942]: Listen normally on 8 cilium_host 192.168.0.169:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 8 cilium_host 192.168.0.169:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 9 cilium_net [fe80::20f2:7aff:fef9:556a%4]:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 10 cilium_host [fe80::843b:7dff:fe63:d87d%5]:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 11 cilium_vxlan [fe80::3c7b:e5ff:fec9:4c6d%6]:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 12 lxc_health [fe80::44e7:feff:fe36:feab%8]:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 13 lxcacbe71b87e90 [fe80::c87d:e6ff:fe11:c7e2%10]:123 Nov 8 00:30:44.655297 ntpd[1942]: 8 Nov 00:30:44 ntpd[1942]: Listen normally on 14 lxc579e173e2c64 [fe80::488d:6aff:fedc:fcf8%12]:123 Nov 8 00:30:44.654537 ntpd[1942]: Listen normally on 9 cilium_net [fe80::20f2:7aff:fef9:556a%4]:123 Nov 8 00:30:44.654594 ntpd[1942]: Listen normally on 10 cilium_host [fe80::843b:7dff:fe63:d87d%5]:123 Nov 8 00:30:44.654632 ntpd[1942]: Listen normally on 11 cilium_vxlan [fe80::3c7b:e5ff:fec9:4c6d%6]:123 Nov 8 00:30:44.654667 ntpd[1942]: Listen normally on 12 lxc_health 
[fe80::44e7:feff:fe36:feab%8]:123 Nov 8 00:30:44.654703 ntpd[1942]: Listen normally on 13 lxcacbe71b87e90 [fe80::c87d:e6ff:fe11:c7e2%10]:123 Nov 8 00:30:44.654739 ntpd[1942]: Listen normally on 14 lxc579e173e2c64 [fe80::488d:6aff:fedc:fcf8%12]:123 Nov 8 00:30:44.968891 containerd[1971]: time="2025-11-08T00:30:44.968602452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:44.971198 containerd[1971]: time="2025-11-08T00:30:44.969188771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:44.971198 containerd[1971]: time="2025-11-08T00:30:44.969365346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:44.972236 containerd[1971]: time="2025-11-08T00:30:44.971290735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:44.990114 containerd[1971]: time="2025-11-08T00:30:44.986439473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:44.995653 containerd[1971]: time="2025-11-08T00:30:44.992436845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:44.995653 containerd[1971]: time="2025-11-08T00:30:44.992492996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:44.995653 containerd[1971]: time="2025-11-08T00:30:44.992639217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:45.057326 systemd[1]: Started cri-containerd-7f97f0b6db87d37cd4e870f94ab966fb4dcf5a784b5739d7de60858b873f1d81.scope - libcontainer container 7f97f0b6db87d37cd4e870f94ab966fb4dcf5a784b5739d7de60858b873f1d81. Nov 8 00:30:45.079419 systemd[1]: run-containerd-runc-k8s.io-3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd-runc.57r7Hx.mount: Deactivated successfully. Nov 8 00:30:45.092249 systemd[1]: Started cri-containerd-3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd.scope - libcontainer container 3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd. 
Nov 8 00:30:45.184570 containerd[1971]: time="2025-11-08T00:30:45.182831178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6jmd5,Uid:3b9b165c-1e58-416a-bcd4-961fc9b1793e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f97f0b6db87d37cd4e870f94ab966fb4dcf5a784b5739d7de60858b873f1d81\"" Nov 8 00:30:45.192764 containerd[1971]: time="2025-11-08T00:30:45.192723019Z" level=info msg="CreateContainer within sandbox \"7f97f0b6db87d37cd4e870f94ab966fb4dcf5a784b5739d7de60858b873f1d81\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:30:45.229543 containerd[1971]: time="2025-11-08T00:30:45.228424769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vmf2n,Uid:4dbf7501-cc21-4c04-ba0a-4138016e6629,Namespace:kube-system,Attempt:0,} returns sandbox id \"3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd\"" Nov 8 00:30:45.233033 containerd[1971]: time="2025-11-08T00:30:45.232998927Z" level=info msg="CreateContainer within sandbox \"3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:30:45.238133 containerd[1971]: time="2025-11-08T00:30:45.238053438Z" level=info msg="CreateContainer within sandbox \"7f97f0b6db87d37cd4e870f94ab966fb4dcf5a784b5739d7de60858b873f1d81\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a83d246bd717fd97c3194659d969b09ead455f9f553bc1bc498b08983e31b5e\"" Nov 8 00:30:45.240430 containerd[1971]: time="2025-11-08T00:30:45.239380224Z" level=info msg="StartContainer for \"0a83d246bd717fd97c3194659d969b09ead455f9f553bc1bc498b08983e31b5e\"" Nov 8 00:30:45.258271 containerd[1971]: time="2025-11-08T00:30:45.258225747Z" level=info msg="CreateContainer within sandbox \"3244fe26bd77990e4e7a1173cc8059e8f75075c8e5cdc95fff1eb570b87d2fbd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2aa5d3df6c7c56b98b643afda6265d5610e563b3c58f286de6ac27a52ce61270\"" Nov 8 00:30:45.259280 containerd[1971]: time="2025-11-08T00:30:45.259245405Z" level=info msg="StartContainer for \"2aa5d3df6c7c56b98b643afda6265d5610e563b3c58f286de6ac27a52ce61270\"" Nov 8 00:30:45.277683 systemd[1]: Started cri-containerd-0a83d246bd717fd97c3194659d969b09ead455f9f553bc1bc498b08983e31b5e.scope - libcontainer container 0a83d246bd717fd97c3194659d969b09ead455f9f553bc1bc498b08983e31b5e. Nov 8 00:30:45.298291 systemd[1]: Started cri-containerd-2aa5d3df6c7c56b98b643afda6265d5610e563b3c58f286de6ac27a52ce61270.scope - libcontainer container 2aa5d3df6c7c56b98b643afda6265d5610e563b3c58f286de6ac27a52ce61270. 
Nov 8 00:30:45.334181 containerd[1971]: time="2025-11-08T00:30:45.334105372Z" level=info msg="StartContainer for \"0a83d246bd717fd97c3194659d969b09ead455f9f553bc1bc498b08983e31b5e\" returns successfully" Nov 8 00:30:45.346514 containerd[1971]: time="2025-11-08T00:30:45.346482275Z" level=info msg="StartContainer for \"2aa5d3df6c7c56b98b643afda6265d5610e563b3c58f286de6ac27a52ce61270\" returns successfully" Nov 8 00:30:46.219605 kubelet[3163]: I1108 00:30:46.219186 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6jmd5" podStartSLOduration=23.219164648 podStartE2EDuration="23.219164648s" podCreationTimestamp="2025-11-08 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:46.218643775 +0000 UTC m=+27.320434762" watchObservedRunningTime="2025-11-08 00:30:46.219164648 +0000 UTC m=+27.320955634" Nov 8 00:30:46.255487 kubelet[3163]: I1108 00:30:46.255422 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vmf2n" podStartSLOduration=23.255401322 podStartE2EDuration="23.255401322s" podCreationTimestamp="2025-11-08 00:30:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:46.254309938 +0000 UTC m=+27.356100922" watchObservedRunningTime="2025-11-08 00:30:46.255401322 +0000 UTC m=+27.357192305" Nov 8 00:30:50.118104 systemd[1]: Started sshd@7-172.31.22.136:22-139.178.89.65:34026.service - OpenSSH per-connection server daemon (139.178.89.65:34026). Nov 8 00:30:50.315039 sshd[4719]: Accepted publickey for core from 139.178.89.65 port 34026 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:30:50.316015 sshd[4719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:50.321136 systemd-logind[1950]: New session 8 of user core. Nov 8 00:30:50.328326 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:30:51.105234 sshd[4719]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:51.108392 systemd[1]: sshd@7-172.31.22.136:22-139.178.89.65:34026.service: Deactivated successfully. Nov 8 00:30:51.110369 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:30:51.112019 systemd-logind[1950]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:30:51.113183 systemd-logind[1950]: Removed session 8. Nov 8 00:30:56.144020 systemd[1]: Started sshd@8-172.31.22.136:22-139.178.89.65:54058.service - OpenSSH per-connection server daemon (139.178.89.65:54058). Nov 8 00:30:56.321402 sshd[4736]: Accepted publickey for core from 139.178.89.65 port 54058 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:30:56.322888 sshd[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:56.327948 systemd-logind[1950]: New session 9 of user core. Nov 8 00:30:56.335318 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:30:56.546824 sshd[4736]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:56.550308 systemd[1]: sshd@8-172.31.22.136:22-139.178.89.65:54058.service: Deactivated successfully. Nov 8 00:30:56.551890 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:30:56.552857 systemd-logind[1950]: Session 9 logged out. Waiting for processes to exit. 
Nov 8 00:30:56.554213 systemd-logind[1950]: Removed session 9. Nov 8 00:31:01.589498 systemd[1]: Started sshd@9-172.31.22.136:22-139.178.89.65:54074.service - OpenSSH per-connection server daemon (139.178.89.65:54074). Nov 8 00:31:01.765693 sshd[4750]: Accepted publickey for core from 139.178.89.65 port 54074 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:01.771600 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:01.802259 systemd-logind[1950]: New session 10 of user core. Nov 8 00:31:01.813330 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:31:02.065739 sshd[4750]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:02.074759 systemd[1]: sshd@9-172.31.22.136:22-139.178.89.65:54074.service: Deactivated successfully. Nov 8 00:31:02.077279 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:31:02.078472 systemd-logind[1950]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:31:02.079617 systemd-logind[1950]: Removed session 10. Nov 8 00:31:07.101153 systemd[1]: Started sshd@10-172.31.22.136:22-139.178.89.65:59310.service - OpenSSH per-connection server daemon (139.178.89.65:59310). Nov 8 00:31:07.288289 sshd[4764]: Accepted publickey for core from 139.178.89.65 port 59310 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:07.289786 sshd[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:07.296006 systemd-logind[1950]: New session 11 of user core. Nov 8 00:31:07.301279 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:31:07.497468 sshd[4764]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:07.501379 systemd[1]: sshd@10-172.31.22.136:22-139.178.89.65:59310.service: Deactivated successfully. Nov 8 00:31:07.503872 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:31:07.505665 systemd-logind[1950]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:31:07.507440 systemd-logind[1950]: Removed session 11. Nov 8 00:31:07.532133 systemd[1]: Started sshd@11-172.31.22.136:22-139.178.89.65:59322.service - OpenSSH per-connection server daemon (139.178.89.65:59322). Nov 8 00:31:07.699712 sshd[4778]: Accepted publickey for core from 139.178.89.65 port 59322 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:07.701186 sshd[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:07.706140 systemd-logind[1950]: New session 12 of user core. Nov 8 00:31:07.713297 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:31:07.980227 sshd[4778]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:07.986254 systemd-logind[1950]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:31:07.989679 systemd[1]: sshd@11-172.31.22.136:22-139.178.89.65:59322.service: Deactivated successfully. Nov 8 00:31:07.992370 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:31:07.993646 systemd-logind[1950]: Removed session 12. Nov 8 00:31:08.013255 systemd[1]: Started sshd@12-172.31.22.136:22-139.178.89.65:59336.service - OpenSSH per-connection server daemon (139.178.89.65:59336). 
Nov 8 00:31:08.177836 sshd[4789]: Accepted publickey for core from 139.178.89.65 port 59336 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:08.179480 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:08.184359 systemd-logind[1950]: New session 13 of user core. Nov 8 00:31:08.186444 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:31:08.388740 sshd[4789]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:08.392304 systemd[1]: sshd@12-172.31.22.136:22-139.178.89.65:59336.service: Deactivated successfully. Nov 8 00:31:08.394576 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:31:08.396337 systemd-logind[1950]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:31:08.397989 systemd-logind[1950]: Removed session 13. Nov 8 00:31:13.425184 systemd[1]: Started sshd@13-172.31.22.136:22-139.178.89.65:59346.service - OpenSSH per-connection server daemon (139.178.89.65:59346). Nov 8 00:31:13.596498 sshd[4803]: Accepted publickey for core from 139.178.89.65 port 59346 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:13.596089 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:13.602138 systemd-logind[1950]: New session 14 of user core. Nov 8 00:31:13.607397 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:31:13.796627 sshd[4803]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:13.800130 systemd[1]: sshd@13-172.31.22.136:22-139.178.89.65:59346.service: Deactivated successfully. Nov 8 00:31:13.801901 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:31:13.802683 systemd-logind[1950]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:31:13.803528 systemd-logind[1950]: Removed session 14. Nov 8 00:31:18.834487 systemd[1]: Started sshd@14-172.31.22.136:22-139.178.89.65:35268.service - OpenSSH per-connection server daemon (139.178.89.65:35268). Nov 8 00:31:19.006399 sshd[4816]: Accepted publickey for core from 139.178.89.65 port 35268 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:19.007882 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:19.014159 systemd-logind[1950]: New session 15 of user core. Nov 8 00:31:19.023363 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:31:19.207835 sshd[4816]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:19.213464 systemd[1]: sshd@14-172.31.22.136:22-139.178.89.65:35268.service: Deactivated successfully. Nov 8 00:31:19.215581 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:31:19.216920 systemd-logind[1950]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:31:19.218303 systemd-logind[1950]: Removed session 15. Nov 8 00:31:19.247468 systemd[1]: Started sshd@15-172.31.22.136:22-139.178.89.65:35274.service - OpenSSH per-connection server daemon (139.178.89.65:35274). Nov 8 00:31:19.413532 sshd[4831]: Accepted publickey for core from 139.178.89.65 port 35274 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:19.415102 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:19.419965 systemd-logind[1950]: New session 16 of user core. Nov 8 00:31:19.431357 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:31:20.053328 sshd[4831]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:20.061247 systemd[1]: sshd@15-172.31.22.136:22-139.178.89.65:35274.service: Deactivated successfully. Nov 8 00:31:20.063661 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:31:20.066066 systemd-logind[1950]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:31:20.067423 systemd-logind[1950]: Removed session 16. Nov 8 00:31:20.091901 systemd[1]: Started sshd@16-172.31.22.136:22-139.178.89.65:35280.service - OpenSSH per-connection server daemon (139.178.89.65:35280). Nov 8 00:31:20.291412 sshd[4842]: Accepted publickey for core from 139.178.89.65 port 35280 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:20.293137 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:20.299464 systemd-logind[1950]: New session 17 of user core. Nov 8 00:31:20.304322 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:31:21.154033 sshd[4842]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:21.159490 systemd-logind[1950]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:31:21.162504 systemd[1]: sshd@16-172.31.22.136:22-139.178.89.65:35280.service: Deactivated successfully. Nov 8 00:31:21.165630 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:31:21.167165 systemd-logind[1950]: Removed session 17. Nov 8 00:31:21.192559 systemd[1]: Started sshd@17-172.31.22.136:22-139.178.89.65:35286.service - OpenSSH per-connection server daemon (139.178.89.65:35286). Nov 8 00:31:21.375414 sshd[4859]: Accepted publickey for core from 139.178.89.65 port 35286 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:21.377506 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:21.383122 systemd-logind[1950]: New session 18 of user core. Nov 8 00:31:21.388403 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:31:21.756252 sshd[4859]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:21.759971 systemd[1]: sshd@17-172.31.22.136:22-139.178.89.65:35286.service: Deactivated successfully. Nov 8 00:31:21.762780 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:31:21.764634 systemd-logind[1950]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:31:21.765922 systemd-logind[1950]: Removed session 18. Nov 8 00:31:21.786302 systemd[1]: Started sshd@18-172.31.22.136:22-139.178.89.65:35292.service - OpenSSH per-connection server daemon (139.178.89.65:35292). Nov 8 00:31:21.975787 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 35292 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:21.977312 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:21.983592 systemd-logind[1950]: New session 19 of user core. Nov 8 00:31:21.990354 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:31:22.184518 sshd[4869]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:22.187653 systemd[1]: sshd@18-172.31.22.136:22-139.178.89.65:35292.service: Deactivated successfully. Nov 8 00:31:22.190212 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:31:22.192144 systemd-logind[1950]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:31:22.193882 systemd-logind[1950]: Removed session 19. 
Nov 8 00:31:27.223529 systemd[1]: Started sshd@19-172.31.22.136:22-139.178.89.65:56996.service - OpenSSH per-connection server daemon (139.178.89.65:56996). Nov 8 00:31:27.385890 sshd[4886]: Accepted publickey for core from 139.178.89.65 port 56996 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:27.386662 sshd[4886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:27.404977 systemd-logind[1950]: New session 20 of user core. Nov 8 00:31:27.415327 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:31:27.595851 sshd[4886]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:27.600476 systemd[1]: sshd@19-172.31.22.136:22-139.178.89.65:56996.service: Deactivated successfully. Nov 8 00:31:27.602216 systemd-logind[1950]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:31:27.603734 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:31:27.604964 systemd-logind[1950]: Removed session 20. Nov 8 00:31:32.635513 systemd[1]: Started sshd@20-172.31.22.136:22-139.178.89.65:57008.service - OpenSSH per-connection server daemon (139.178.89.65:57008). Nov 8 00:31:32.795722 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 57008 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:32.797272 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:32.802150 systemd-logind[1950]: New session 21 of user core. Nov 8 00:31:32.808301 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:31:32.986418 sshd[4899]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:32.990495 systemd[1]: sshd@20-172.31.22.136:22-139.178.89.65:57008.service: Deactivated successfully. Nov 8 00:31:32.992695 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:31:32.994045 systemd-logind[1950]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:31:32.995406 systemd-logind[1950]: Removed session 21. Nov 8 00:31:38.021049 systemd[1]: Started sshd@21-172.31.22.136:22-139.178.89.65:51906.service - OpenSSH per-connection server daemon (139.178.89.65:51906). Nov 8 00:31:38.184561 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 51906 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:38.186160 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:38.191341 systemd-logind[1950]: New session 22 of user core. Nov 8 00:31:38.194279 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:31:38.384271 sshd[4912]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:38.388115 systemd[1]: sshd@21-172.31.22.136:22-139.178.89.65:51906.service: Deactivated successfully. Nov 8 00:31:38.390361 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:31:38.391959 systemd-logind[1950]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:31:38.393533 systemd-logind[1950]: Removed session 22. Nov 8 00:31:43.428562 systemd[1]: Started sshd@22-172.31.22.136:22-139.178.89.65:51914.service - OpenSSH per-connection server daemon (139.178.89.65:51914). 
Nov 8 00:31:43.587261 sshd[4925]: Accepted publickey for core from 139.178.89.65 port 51914 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:43.588834 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:43.593838 systemd-logind[1950]: New session 23 of user core. Nov 8 00:31:43.598746 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:31:43.781708 sshd[4925]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:43.785114 systemd[1]: sshd@22-172.31.22.136:22-139.178.89.65:51914.service: Deactivated successfully. Nov 8 00:31:43.787341 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:31:43.789340 systemd-logind[1950]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:31:43.793111 systemd-logind[1950]: Removed session 23. Nov 8 00:31:43.819264 systemd[1]: Started sshd@23-172.31.22.136:22-139.178.89.65:51920.service - OpenSSH per-connection server daemon (139.178.89.65:51920). Nov 8 00:31:43.995444 sshd[4938]: Accepted publickey for core from 139.178.89.65 port 51920 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:43.997174 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:44.002261 systemd-logind[1950]: New session 24 of user core. Nov 8 00:31:44.005317 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:31:45.464610 systemd[1]: run-containerd-runc-k8s.io-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4-runc.TVoLuL.mount: Deactivated successfully. Nov 8 00:31:45.492659 containerd[1971]: time="2025-11-08T00:31:45.492577950Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:31:45.513281 containerd[1971]: time="2025-11-08T00:31:45.513215478Z" level=info msg="StopContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" with timeout 2 (s)" Nov 8 00:31:45.514279 containerd[1971]: time="2025-11-08T00:31:45.514188802Z" level=info msg="StopContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" with timeout 30 (s)" Nov 8 00:31:45.515746 containerd[1971]: time="2025-11-08T00:31:45.515718849Z" level=info msg="Stop container \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" with signal terminated" Nov 8 00:31:45.515928 containerd[1971]: time="2025-11-08T00:31:45.515725782Z" level=info msg="Stop container \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" with signal terminated" Nov 8 00:31:45.527050 systemd-networkd[1864]: lxc_health: Link DOWN Nov 8 00:31:45.527058 systemd-networkd[1864]: lxc_health: Lost carrier Nov 8 00:31:45.534838 systemd[1]: cri-containerd-300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe.scope: Deactivated successfully. Nov 8 00:31:45.558468 systemd[1]: cri-containerd-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4.scope: Deactivated successfully. Nov 8 00:31:45.559308 systemd[1]: cri-containerd-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4.scope: Consumed 8.031s CPU time. Nov 8 00:31:45.590350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe-rootfs.mount: Deactivated successfully. 
Nov 8 00:31:45.602395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4-rootfs.mount: Deactivated successfully. Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613312587Z" level=info msg="shim disconnected" id=30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4 namespace=k8s.io Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613442175Z" level=warning msg="cleaning up after shim disconnected" id=30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4 namespace=k8s.io Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613454619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613517509Z" level=info msg="shim disconnected" id=300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe namespace=k8s.io Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613555028Z" level=warning msg="cleaning up after shim disconnected" id=300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe namespace=k8s.io Nov 8 00:31:45.613604 containerd[1971]: time="2025-11-08T00:31:45.613562893Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:45.636132 containerd[1971]: time="2025-11-08T00:31:45.636064390Z" level=info msg="StopContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" returns successfully" Nov 8 00:31:45.637891 containerd[1971]: time="2025-11-08T00:31:45.637740821Z" level=info msg="StopContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" returns successfully" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642819539Z" level=info msg="StopPodSandbox for \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\"" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642870072Z" level=info msg="Container to stop \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642881886Z" level=info msg="Container to stop \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642892702Z" level=info msg="Container to stop \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642902410Z" level=info msg="Container to stop \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.643025 containerd[1971]: time="2025-11-08T00:31:45.642912181Z" level=info msg="Container to stop \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.645801 containerd[1971]: time="2025-11-08T00:31:45.644471777Z" level=info msg="StopPodSandbox for \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\"" Nov 8 00:31:45.645801 containerd[1971]: time="2025-11-08T00:31:45.644511986Z" level=info msg="Container to stop \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Nov 8 00:31:45.645242 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830-shm.mount: Deactivated successfully. Nov 8 00:31:45.654351 systemd[1]: cri-containerd-8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1.scope: Deactivated successfully. Nov 8 00:31:45.657157 systemd[1]: cri-containerd-9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830.scope: Deactivated successfully. Nov 8 00:31:45.697293 containerd[1971]: time="2025-11-08T00:31:45.697228027Z" level=info msg="shim disconnected" id=8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1 namespace=k8s.io Nov 8 00:31:45.698067 containerd[1971]: time="2025-11-08T00:31:45.697240419Z" level=info msg="shim disconnected" id=9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830 namespace=k8s.io Nov 8 00:31:45.698067 containerd[1971]: time="2025-11-08T00:31:45.698064500Z" level=warning msg="cleaning up after shim disconnected" id=9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830 namespace=k8s.io Nov 8 00:31:45.698067 containerd[1971]: time="2025-11-08T00:31:45.698073519Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:45.700209 containerd[1971]: time="2025-11-08T00:31:45.698462143Z" level=warning msg="cleaning up after shim disconnected" id=8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1 namespace=k8s.io Nov 8 00:31:45.700209 containerd[1971]: time="2025-11-08T00:31:45.698476811Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:45.724111 containerd[1971]: time="2025-11-08T00:31:45.723972531Z" level=info msg="TearDown network for sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" successfully" Nov 8 00:31:45.725157 containerd[1971]: time="2025-11-08T00:31:45.724770670Z" level=info msg="TearDown network for sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" successfully" Nov 8 00:31:45.725157 containerd[1971]: time="2025-11-08T00:31:45.725150073Z" level=info msg="StopPodSandbox for \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" returns successfully" Nov 8 00:31:45.725295 containerd[1971]: time="2025-11-08T00:31:45.725251891Z" level=info msg="StopPodSandbox for \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" returns successfully" Nov 8 00:31:45.901767 kubelet[3163]: I1108 00:31:45.901716 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-lib-modules\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.901767 kubelet[3163]: I1108 00:31:45.901776 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqt5k\" (UniqueName: \"kubernetes.io/projected/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-kube-api-access-cqt5k\") pod \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\" (UID: \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901804 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-hostproc\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901819 3163 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-etc-cni-netd\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901834 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-xtables-lock\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901852 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-config-path\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901866 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-run\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902302 kubelet[3163]: I1108 00:31:45.901879 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-bpf-maps\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901893 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-kernel\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901910 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-hubble-tls\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901925 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngz5p\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-kube-api-access-ngz5p\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901946 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25ba1508-1acc-403c-bc11-c7e6e12d17de-clustermesh-secrets\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901964 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-cgroup\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902470 kubelet[3163]: I1108 00:31:45.901982 3163 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-cilium-config-path\") pod \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\" (UID: \"f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa\") " Nov 8 00:31:45.902625 kubelet[3163]: I1108 00:31:45.901997 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cni-path\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.902625 kubelet[3163]: I1108 00:31:45.902012 3163 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-net\") pod \"25ba1508-1acc-403c-bc11-c7e6e12d17de\" (UID: \"25ba1508-1acc-403c-bc11-c7e6e12d17de\") " Nov 8 00:31:45.908525 kubelet[3163]: I1108 00:31:45.907789 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.908525 kubelet[3163]: I1108 00:31:45.907895 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.908525 kubelet[3163]: I1108 00:31:45.908183 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.908525 kubelet[3163]: I1108 00:31:45.908243 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.911592 kubelet[3163]: I1108 00:31:45.911539 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-kube-api-access-cqt5k" (OuterVolumeSpecName: "kube-api-access-cqt5k") pod "f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa" (UID: "f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa"). InnerVolumeSpecName "kube-api-access-cqt5k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:45.911730 kubelet[3163]: I1108 00:31:45.911608 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-hostproc" (OuterVolumeSpecName: "hostproc") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.911730 kubelet[3163]: I1108 00:31:45.911626 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.911730 kubelet[3163]: I1108 00:31:45.911639 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.912446 kubelet[3163]: I1108 00:31:45.912198 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:45.913678 kubelet[3163]: I1108 00:31:45.913647 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:31:45.913759 kubelet[3163]: I1108 00:31:45.913698 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.913759 kubelet[3163]: I1108 00:31:45.913718 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.916543 kubelet[3163]: I1108 00:31:45.916510 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-kube-api-access-ngz5p" (OuterVolumeSpecName: "kube-api-access-ngz5p") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "kube-api-access-ngz5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:45.917276 kubelet[3163]: I1108 00:31:45.917208 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ba1508-1acc-403c-bc11-c7e6e12d17de-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:31:45.917276 kubelet[3163]: I1108 00:31:45.917256 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cni-path" (OuterVolumeSpecName: "cni-path") pod "25ba1508-1acc-403c-bc11-c7e6e12d17de" (UID: "25ba1508-1acc-403c-bc11-c7e6e12d17de"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:31:45.918792 kubelet[3163]: I1108 00:31:45.918745 3163 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa" (UID: "f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003011 3163 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-kernel\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003057 3163 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-hubble-tls\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003067 3163 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngz5p\" (UniqueName: \"kubernetes.io/projected/25ba1508-1acc-403c-bc11-c7e6e12d17de-kube-api-access-ngz5p\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003076 3163 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-run\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003109 3163 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-bpf-maps\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003118 3163 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25ba1508-1acc-403c-bc11-c7e6e12d17de-clustermesh-secrets\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003130 3163 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-cgroup\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003178 kubelet[3163]: I1108 00:31:46.003139 3163 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-cilium-config-path\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003146 3163 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-cni-path\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003153 3163 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-host-proc-sys-net\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003161 3163 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cqt5k\" (UniqueName: \"kubernetes.io/projected/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa-kube-api-access-cqt5k\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003169 3163 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-lib-modules\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003177 3163 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-hostproc\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003184 3163 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-xtables-lock\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003191 3163 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ba1508-1acc-403c-bc11-c7e6e12d17de-cilium-config-path\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.003497 kubelet[3163]: I1108 00:31:46.003205 3163 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25ba1508-1acc-403c-bc11-c7e6e12d17de-etc-cni-netd\") on node \"ip-172-31-22-136\" DevicePath \"\"" Nov 8 00:31:46.333955 kubelet[3163]: I1108 00:31:46.333155 3163 scope.go:117] "RemoveContainer" containerID="300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe" Nov 8 00:31:46.335030 containerd[1971]: time="2025-11-08T00:31:46.334874338Z" level=info msg="RemoveContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\"" Nov 8 00:31:46.341360 systemd[1]: Removed slice kubepods-burstable-pod25ba1508_1acc_403c_bc11_c7e6e12d17de.slice - libcontainer container kubepods-burstable-pod25ba1508_1acc_403c_bc11_c7e6e12d17de.slice. Nov 8 00:31:46.341452 systemd[1]: kubepods-burstable-pod25ba1508_1acc_403c_bc11_c7e6e12d17de.slice: Consumed 8.130s CPU time. Nov 8 00:31:46.343981 systemd[1]: Removed slice kubepods-besteffort-podf7c72f0b_0e0d_4ded_97f8_13dc3f8a51aa.slice - libcontainer container kubepods-besteffort-podf7c72f0b_0e0d_4ded_97f8_13dc3f8a51aa.slice. 
Nov 8 00:31:46.347813 containerd[1971]: time="2025-11-08T00:31:46.346700431Z" level=info msg="RemoveContainer for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" returns successfully" Nov 8 00:31:46.349285 kubelet[3163]: I1108 00:31:46.349245 3163 scope.go:117] "RemoveContainer" containerID="300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe" Nov 8 00:31:46.378436 containerd[1971]: time="2025-11-08T00:31:46.352351900Z" level=error msg="ContainerStatus for \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\": not found" Nov 8 00:31:46.389974 kubelet[3163]: E1108 00:31:46.389930 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\": not found" containerID="300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe" Nov 8 00:31:46.390188 kubelet[3163]: I1108 00:31:46.389983 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe"} err="failed to get container status \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\": rpc error: code = NotFound desc = an error occurred when try to find container \"300c7b3c3b24e00abb5b6b907d60451e6888fe5644c0bbc992ab7d7b0fdf9afe\": not found" Nov 8 00:31:46.390188 kubelet[3163]: I1108 00:31:46.390099 3163 scope.go:117] "RemoveContainer" containerID="30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4" Nov 8 00:31:46.392325 containerd[1971]: time="2025-11-08T00:31:46.392291970Z" level=info msg="RemoveContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\"" Nov 8 00:31:46.398552 containerd[1971]: time="2025-11-08T00:31:46.398229209Z" level=info msg="RemoveContainer for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" returns successfully" Nov 8 00:31:46.398693 kubelet[3163]: I1108 00:31:46.398495 3163 scope.go:117] "RemoveContainer" containerID="2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66" Nov 8 00:31:46.401970 containerd[1971]: time="2025-11-08T00:31:46.401033253Z" level=info msg="RemoveContainer for \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\"" Nov 8 00:31:46.425246 containerd[1971]: time="2025-11-08T00:31:46.425196174Z" level=info msg="RemoveContainer for \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\" returns successfully" Nov 8 00:31:46.425519 kubelet[3163]: I1108 00:31:46.425474 3163 scope.go:117] "RemoveContainer" containerID="4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f" Nov 8 00:31:46.427091 containerd[1971]: time="2025-11-08T00:31:46.427052921Z" level=info msg="RemoveContainer for \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\"" Nov 8 00:31:46.432072 containerd[1971]: time="2025-11-08T00:31:46.432011393Z" level=info msg="RemoveContainer for \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\" returns successfully" Nov 8 00:31:46.432316 kubelet[3163]: I1108 00:31:46.432291 3163 scope.go:117] "RemoveContainer" containerID="da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b" Nov 8 00:31:46.433946 containerd[1971]: time="2025-11-08T00:31:46.433894266Z" 
level=info msg="RemoveContainer for \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\"" Nov 8 00:31:46.439257 containerd[1971]: time="2025-11-08T00:31:46.439215941Z" level=info msg="RemoveContainer for \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\" returns successfully" Nov 8 00:31:46.439559 kubelet[3163]: I1108 00:31:46.439487 3163 scope.go:117] "RemoveContainer" containerID="2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878" Nov 8 00:31:46.440648 containerd[1971]: time="2025-11-08T00:31:46.440602235Z" level=info msg="RemoveContainer for \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\"" Nov 8 00:31:46.447099 containerd[1971]: time="2025-11-08T00:31:46.445659552Z" level=info msg="RemoveContainer for \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\" returns successfully" Nov 8 00:31:46.446905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1-rootfs.mount: Deactivated successfully. Nov 8 00:31:46.447259 kubelet[3163]: I1108 00:31:46.446267 3163 scope.go:117] "RemoveContainer" containerID="30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4" Nov 8 00:31:46.447000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1-shm.mount: Deactivated successfully. Nov 8 00:31:46.447060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830-rootfs.mount: Deactivated successfully. Nov 8 00:31:46.447554 systemd[1]: var-lib-kubelet-pods-f7c72f0b\x2d0e0d\x2d4ded\x2d97f8\x2d13dc3f8a51aa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqt5k.mount: Deactivated successfully. Nov 8 00:31:46.447631 containerd[1971]: time="2025-11-08T00:31:46.447594021Z" level=error msg="ContainerStatus for \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\": not found" Nov 8 00:31:46.447767 systemd[1]: var-lib-kubelet-pods-25ba1508\x2d1acc\x2d403c\x2dbc11\x2dc7e6e12d17de-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngz5p.mount: Deactivated successfully. 
Nov 8 00:31:46.447826 kubelet[3163]: E1108 00:31:46.447799 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\": not found" containerID="30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4" Nov 8 00:31:46.447874 kubelet[3163]: I1108 00:31:46.447840 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4"} err="failed to get container status \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"30a0a5d2801ff4a0149568857ac02735affaf211cc5aef8bc634f1a512af3fa4\": not found" Nov 8 00:31:46.447913 kubelet[3163]: I1108 00:31:46.447881 3163 scope.go:117] "RemoveContainer" containerID="2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66" Nov 8 00:31:46.448042 systemd[1]: var-lib-kubelet-pods-25ba1508\x2d1acc\x2d403c\x2dbc11\x2dc7e6e12d17de-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 8 00:31:46.448202 systemd[1]: var-lib-kubelet-pods-25ba1508\x2d1acc\x2d403c\x2dbc11\x2dc7e6e12d17de-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 8 00:31:46.448252 containerd[1971]: time="2025-11-08T00:31:46.448060175Z" level=error msg="ContainerStatus for \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\": not found" Nov 8 00:31:46.448842 kubelet[3163]: E1108 00:31:46.448369 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\": not found" containerID="2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66" Nov 8 00:31:46.448842 kubelet[3163]: I1108 00:31:46.448397 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66"} err="failed to get container status \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c2310b088fc3d0d9bc4429c8987c054a4aa498f5026e2c3d14b02a9bdf6fe66\": not found" Nov 8 00:31:46.448842 kubelet[3163]: I1108 00:31:46.448421 3163 scope.go:117] "RemoveContainer" containerID="4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f" Nov 8 00:31:46.448842 kubelet[3163]: E1108 00:31:46.448837 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\": not found" containerID="4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f" Nov 8 00:31:46.448975 containerd[1971]: time="2025-11-08T00:31:46.448632916Z" level=error msg="ContainerStatus for \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\": not found" Nov 8 
00:31:46.449011 kubelet[3163]: I1108 00:31:46.448860 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f"} err="failed to get container status \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c91d7f74591e73b09a303608eee1366eac92be58ebfc2fdb0a2d02c8b2de81f\": not found" Nov 8 00:31:46.449011 kubelet[3163]: I1108 00:31:46.448877 3163 scope.go:117] "RemoveContainer" containerID="da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b" Nov 8 00:31:46.451596 containerd[1971]: time="2025-11-08T00:31:46.449037993Z" level=error msg="ContainerStatus for \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\": not found" Nov 8 00:31:46.451596 containerd[1971]: time="2025-11-08T00:31:46.449404705Z" level=error msg="ContainerStatus for \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\": not found" Nov 8 00:31:46.451685 kubelet[3163]: E1108 00:31:46.449177 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\": not found" containerID="da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b" Nov 8 00:31:46.451685 kubelet[3163]: I1108 00:31:46.449201 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b"} err="failed to get container status \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\": rpc error: code = NotFound desc = an error occurred when try to find container \"da5b8327affc3d54cc274b46cd1fee187add7f880ef85024b2d3dddd1f2bc57b\": not found" Nov 8 00:31:46.451685 kubelet[3163]: I1108 00:31:46.449217 3163 scope.go:117] "RemoveContainer" containerID="2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878" Nov 8 00:31:46.451685 kubelet[3163]: E1108 00:31:46.449518 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\": not found" containerID="2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878" Nov 8 00:31:46.451685 kubelet[3163]: I1108 00:31:46.449535 3163 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878"} err="failed to get container status \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a59a474822f61d977217ac3e178589f6da2b6bfcc268afa6f7a3cfc8a3c8878\": not found" Nov 8 00:31:47.042435 kubelet[3163]: I1108 00:31:47.042400 3163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ba1508-1acc-403c-bc11-c7e6e12d17de" path="/var/lib/kubelet/pods/25ba1508-1acc-403c-bc11-c7e6e12d17de/volumes" Nov 8 00:31:47.043015 
kubelet[3163]: I1108 00:31:47.042985 3163 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa" path="/var/lib/kubelet/pods/f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa/volumes" Nov 8 00:31:47.395170 sshd[4938]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:47.397856 systemd[1]: sshd@23-172.31.22.136:22-139.178.89.65:51920.service: Deactivated successfully. Nov 8 00:31:47.399891 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:31:47.401228 systemd-logind[1950]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:31:47.402776 systemd-logind[1950]: Removed session 24. Nov 8 00:31:47.430464 systemd[1]: Started sshd@24-172.31.22.136:22-139.178.89.65:60308.service - OpenSSH per-connection server daemon (139.178.89.65:60308). Nov 8 00:31:47.609236 sshd[5099]: Accepted publickey for core from 139.178.89.65 port 60308 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:47.611272 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:47.617306 systemd-logind[1950]: New session 25 of user core. Nov 8 00:31:47.624296 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:31:47.654431 ntpd[1942]: Deleting interface #12 lxc_health, fe80::44e7:feff:fe36:feab%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Nov 8 00:31:47.655039 ntpd[1942]: 8 Nov 00:31:47 ntpd[1942]: Deleting interface #12 lxc_health, fe80::44e7:feff:fe36:feab%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Nov 8 00:31:48.117900 sshd[5099]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:48.122915 systemd-logind[1950]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:31:48.123942 systemd[1]: sshd@24-172.31.22.136:22-139.178.89.65:60308.service: Deactivated successfully. Nov 8 00:31:48.126602 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:31:48.129188 systemd-logind[1950]: Removed session 25. Nov 8 00:31:48.132936 kubelet[3163]: I1108 00:31:48.132687 3163 memory_manager.go:355] "RemoveStaleState removing state" podUID="25ba1508-1acc-403c-bc11-c7e6e12d17de" containerName="cilium-agent" Nov 8 00:31:48.132936 kubelet[3163]: I1108 00:31:48.132711 3163 memory_manager.go:355] "RemoveStaleState removing state" podUID="f7c72f0b-0e0d-4ded-97f8-13dc3f8a51aa" containerName="cilium-operator" Nov 8 00:31:48.158418 systemd[1]: Started sshd@25-172.31.22.136:22-139.178.89.65:60320.service - OpenSSH per-connection server daemon (139.178.89.65:60320). 
Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216227 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-lib-modules\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216264 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-cilium-run\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216285 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a77c0199-3324-4a9f-9307-6a05072aa0cf-cilium-ipsec-secrets\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216301 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9bc\" (UniqueName: \"kubernetes.io/projected/a77c0199-3324-4a9f-9307-6a05072aa0cf-kube-api-access-8z9bc\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216318 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-bpf-maps\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.216951 kubelet[3163]: I1108 00:31:48.216334 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-hostproc\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216349 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-host-proc-sys-kernel\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216363 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a77c0199-3324-4a9f-9307-6a05072aa0cf-hubble-tls\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216380 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-etc-cni-netd\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216395 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-xtables-lock\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216411 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-cilium-cgroup\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217227 kubelet[3163]: I1108 00:31:48.216425 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a77c0199-3324-4a9f-9307-6a05072aa0cf-clustermesh-secrets\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217437 kubelet[3163]: I1108 00:31:48.216439 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a77c0199-3324-4a9f-9307-6a05072aa0cf-cilium-config-path\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217437 kubelet[3163]: I1108 00:31:48.216460 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-host-proc-sys-net\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.217437 kubelet[3163]: I1108 00:31:48.216477 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a77c0199-3324-4a9f-9307-6a05072aa0cf-cni-path\") pod \"cilium-hzwph\" (UID: \"a77c0199-3324-4a9f-9307-6a05072aa0cf\") " pod="kube-system/cilium-hzwph" Nov 8 00:31:48.218908 systemd[1]: Created slice kubepods-burstable-poda77c0199_3324_4a9f_9307_6a05072aa0cf.slice - libcontainer container kubepods-burstable-poda77c0199_3324_4a9f_9307_6a05072aa0cf.slice. Nov 8 00:31:48.329549 sshd[5111]: Accepted publickey for core from 139.178.89.65 port 60320 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:48.332754 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:48.351963 systemd-logind[1950]: New session 26 of user core. Nov 8 00:31:48.356266 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 8 00:31:48.474095 sshd[5111]: pam_unix(sshd:session): session closed for user core Nov 8 00:31:48.478363 systemd[1]: sshd@25-172.31.22.136:22-139.178.89.65:60320.service: Deactivated successfully. Nov 8 00:31:48.480399 systemd[1]: session-26.scope: Deactivated successfully. Nov 8 00:31:48.481025 systemd-logind[1950]: Session 26 logged out. Waiting for processes to exit. Nov 8 00:31:48.481956 systemd-logind[1950]: Removed session 26. Nov 8 00:31:48.517450 systemd[1]: Started sshd@26-172.31.22.136:22-139.178.89.65:60326.service - OpenSSH per-connection server daemon (139.178.89.65:60326). 
Nov 8 00:31:48.540931 containerd[1971]: time="2025-11-08T00:31:48.540897300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzwph,Uid:a77c0199-3324-4a9f-9307-6a05072aa0cf,Namespace:kube-system,Attempt:0,}" Nov 8 00:31:48.570368 containerd[1971]: time="2025-11-08T00:31:48.569989575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:48.570368 containerd[1971]: time="2025-11-08T00:31:48.570044476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:48.570368 containerd[1971]: time="2025-11-08T00:31:48.570055590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:48.570639 containerd[1971]: time="2025-11-08T00:31:48.570493108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:48.590292 systemd[1]: Started cri-containerd-bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665.scope - libcontainer container bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665. Nov 8 00:31:48.616402 containerd[1971]: time="2025-11-08T00:31:48.616342875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hzwph,Uid:a77c0199-3324-4a9f-9307-6a05072aa0cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\"" Nov 8 00:31:48.620206 containerd[1971]: time="2025-11-08T00:31:48.620170556Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 8 00:31:48.640868 containerd[1971]: time="2025-11-08T00:31:48.640746101Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0\"" Nov 8 00:31:48.642106 containerd[1971]: time="2025-11-08T00:31:48.642053395Z" level=info msg="StartContainer for \"76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0\"" Nov 8 00:31:48.670297 systemd[1]: Started cri-containerd-76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0.scope - libcontainer container 76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0. Nov 8 00:31:48.679185 sshd[5124]: Accepted publickey for core from 139.178.89.65 port 60326 ssh2: RSA SHA256:1oyAPNcvtiF+2laxu2RHNBT3uo794ofoS8dSi3ifLuk Nov 8 00:31:48.683739 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:31:48.690235 systemd-logind[1950]: New session 27 of user core. Nov 8 00:31:48.695378 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 8 00:31:48.706473 containerd[1971]: time="2025-11-08T00:31:48.706370704Z" level=info msg="StartContainer for \"76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0\" returns successfully" Nov 8 00:31:48.726475 systemd[1]: cri-containerd-76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0.scope: Deactivated successfully. 
Nov 8 00:31:48.771549 containerd[1971]: time="2025-11-08T00:31:48.771300626Z" level=info msg="shim disconnected" id=76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0 namespace=k8s.io Nov 8 00:31:48.771549 containerd[1971]: time="2025-11-08T00:31:48.771371424Z" level=warning msg="cleaning up after shim disconnected" id=76c5a7d219f09c768ea81ea5b58c391b4d456888cebb829dcb5c3f53e8f73fe0 namespace=k8s.io Nov 8 00:31:48.771549 containerd[1971]: time="2025-11-08T00:31:48.771381489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:49.148775 kubelet[3163]: E1108 00:31:49.148702 3163 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:31:49.348721 containerd[1971]: time="2025-11-08T00:31:49.348678217Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 8 00:31:49.371798 containerd[1971]: time="2025-11-08T00:31:49.371756132Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4\"" Nov 8 00:31:49.373403 containerd[1971]: time="2025-11-08T00:31:49.372601602Z" level=info msg="StartContainer for \"fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4\"" Nov 8 00:31:49.416330 systemd[1]: Started cri-containerd-fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4.scope - libcontainer container fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4. Nov 8 00:31:49.456349 containerd[1971]: time="2025-11-08T00:31:49.456306944Z" level=info msg="StartContainer for \"fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4\" returns successfully" Nov 8 00:31:49.470477 systemd[1]: cri-containerd-fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4.scope: Deactivated successfully. Nov 8 00:31:49.506947 containerd[1971]: time="2025-11-08T00:31:49.506736420Z" level=info msg="shim disconnected" id=fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4 namespace=k8s.io Nov 8 00:31:49.506947 containerd[1971]: time="2025-11-08T00:31:49.506782503Z" level=warning msg="cleaning up after shim disconnected" id=fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4 namespace=k8s.io Nov 8 00:31:49.506947 containerd[1971]: time="2025-11-08T00:31:49.506790913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:50.325092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb21abfc7e8d99a7f78053a98e7ca338ccf551d409d2190ec3de194a86ba08a4-rootfs.mount: Deactivated successfully. 
Nov 8 00:31:50.352631 containerd[1971]: time="2025-11-08T00:31:50.352278674Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 8 00:31:50.385402 containerd[1971]: time="2025-11-08T00:31:50.385357819Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed\"" Nov 8 00:31:50.386103 containerd[1971]: time="2025-11-08T00:31:50.386053624Z" level=info msg="StartContainer for \"d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed\"" Nov 8 00:31:50.427348 systemd[1]: Started cri-containerd-d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed.scope - libcontainer container d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed. Nov 8 00:31:50.455105 containerd[1971]: time="2025-11-08T00:31:50.455048895Z" level=info msg="StartContainer for \"d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed\" returns successfully" Nov 8 00:31:50.462718 systemd[1]: cri-containerd-d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed.scope: Deactivated successfully. Nov 8 00:31:50.495870 containerd[1971]: time="2025-11-08T00:31:50.495730541Z" level=info msg="shim disconnected" id=d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed namespace=k8s.io Nov 8 00:31:50.496236 containerd[1971]: time="2025-11-08T00:31:50.495876195Z" level=warning msg="cleaning up after shim disconnected" id=d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed namespace=k8s.io Nov 8 00:31:50.496236 containerd[1971]: time="2025-11-08T00:31:50.495892465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:51.325118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7edb4e444ba3a1546341d67bf962bdc9ef92916e77feba7a49f268324ecb9ed-rootfs.mount: Deactivated successfully. Nov 8 00:31:51.356995 containerd[1971]: time="2025-11-08T00:31:51.356424286Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 8 00:31:51.375573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210851391.mount: Deactivated successfully. Nov 8 00:31:51.382349 containerd[1971]: time="2025-11-08T00:31:51.382234774Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559\"" Nov 8 00:31:51.387737 containerd[1971]: time="2025-11-08T00:31:51.387708871Z" level=info msg="StartContainer for \"0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559\"" Nov 8 00:31:51.423341 systemd[1]: Started cri-containerd-0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559.scope - libcontainer container 0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559. Nov 8 00:31:51.448767 systemd[1]: cri-containerd-0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559.scope: Deactivated successfully. 
Nov 8 00:31:51.451381 kubelet[3163]: I1108 00:31:51.449684 3163 setters.go:602] "Node became not ready" node="ip-172-31-22-136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:31:51Z","lastTransitionTime":"2025-11-08T00:31:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 8 00:31:51.454364 containerd[1971]: time="2025-11-08T00:31:51.454032853Z" level=info msg="StartContainer for \"0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559\" returns successfully" Nov 8 00:31:51.494827 containerd[1971]: time="2025-11-08T00:31:51.494749847Z" level=info msg="shim disconnected" id=0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559 namespace=k8s.io Nov 8 00:31:51.494827 containerd[1971]: time="2025-11-08T00:31:51.494801629Z" level=warning msg="cleaning up after shim disconnected" id=0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559 namespace=k8s.io Nov 8 00:31:51.494827 containerd[1971]: time="2025-11-08T00:31:51.494810789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:31:52.325317 systemd[1]: run-containerd-runc-k8s.io-0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559-runc.hXp40h.mount: Deactivated successfully. Nov 8 00:31:52.325424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a2a2153f511246fa6e30f725fa6b0fb10ddf4922dec0eeab397c6df19f0b559-rootfs.mount: Deactivated successfully. Nov 8 00:31:52.360766 containerd[1971]: time="2025-11-08T00:31:52.360675292Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 8 00:31:52.385417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217040421.mount: Deactivated successfully. Nov 8 00:31:52.390383 containerd[1971]: time="2025-11-08T00:31:52.390327313Z" level=info msg="CreateContainer within sandbox \"bdfa39d4ebabc855c10351f48e1d437ca3c19ea399203a318e9cae22f1290665\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990\"" Nov 8 00:31:52.392323 containerd[1971]: time="2025-11-08T00:31:52.391152111Z" level=info msg="StartContainer for \"e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990\"" Nov 8 00:31:52.428280 systemd[1]: Started cri-containerd-e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990.scope - libcontainer container e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990. 
Nov 8 00:31:52.465917 containerd[1971]: time="2025-11-08T00:31:52.465871929Z" level=info msg="StartContainer for \"e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990\" returns successfully" Nov 8 00:31:53.085223 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 8 00:31:53.379872 kubelet[3163]: I1108 00:31:53.379576 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hzwph" podStartSLOduration=5.379556137 podStartE2EDuration="5.379556137s" podCreationTimestamp="2025-11-08 00:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:53.379189504 +0000 UTC m=+94.480980488" watchObservedRunningTime="2025-11-08 00:31:53.379556137 +0000 UTC m=+94.481347121" Nov 8 00:31:55.345113 systemd[1]: run-containerd-runc-k8s.io-e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990-runc.eUIffm.mount: Deactivated successfully. Nov 8 00:31:56.104766 systemd-networkd[1864]: lxc_health: Link UP Nov 8 00:31:56.112108 systemd-networkd[1864]: lxc_health: Gained carrier Nov 8 00:31:56.116516 (udev-worker)[5966]: Network interface NamePolicy= disabled on kernel command line. Nov 8 00:31:57.198346 systemd-networkd[1864]: lxc_health: Gained IPv6LL Nov 8 00:31:57.526918 systemd[1]: run-containerd-runc-k8s.io-e9e4d28c9869620749e10254950b16513d664e608bdb04925cfaca8174b44990-runc.rkBIE5.mount: Deactivated successfully. Nov 8 00:31:59.654443 ntpd[1942]: Listen normally on 15 lxc_health [fe80::8f2:65ff:fe33:9c24%14]:123 Nov 8 00:31:59.656491 ntpd[1942]: 8 Nov 00:31:59 ntpd[1942]: Listen normally on 15 lxc_health [fe80::8f2:65ff:fe33:9c24%14]:123 Nov 8 00:32:02.043487 sshd[5124]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:02.049581 systemd-logind[1950]: Session 27 logged out. Waiting for processes to exit. Nov 8 00:32:02.050810 systemd[1]: sshd@26-172.31.22.136:22-139.178.89.65:60326.service: Deactivated successfully. Nov 8 00:32:02.053581 systemd[1]: session-27.scope: Deactivated successfully. Nov 8 00:32:02.054692 systemd-logind[1950]: Removed session 27. Nov 8 00:32:16.421448 systemd[1]: cri-containerd-937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae.scope: Deactivated successfully. Nov 8 00:32:16.422029 systemd[1]: cri-containerd-937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae.scope: Consumed 2.481s CPU time, 44.8M memory peak, 0B memory swap peak. Nov 8 00:32:16.455043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae-rootfs.mount: Deactivated successfully. 
Nov 8 00:32:16.475678 containerd[1971]: time="2025-11-08T00:32:16.475598214Z" level=info msg="shim disconnected" id=937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae namespace=k8s.io Nov 8 00:32:16.475678 containerd[1971]: time="2025-11-08T00:32:16.475670460Z" level=warning msg="cleaning up after shim disconnected" id=937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae namespace=k8s.io Nov 8 00:32:16.475678 containerd[1971]: time="2025-11-08T00:32:16.475683001Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:32:17.414434 kubelet[3163]: I1108 00:32:17.414332 3163 scope.go:117] "RemoveContainer" containerID="937af8ffabc6e8ed781cde195923b864ff659c149f748efb1f90c4c7c3e8fdae" Nov 8 00:32:17.421068 containerd[1971]: time="2025-11-08T00:32:17.421026690Z" level=info msg="CreateContainer within sandbox \"ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:32:17.443218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918776766.mount: Deactivated successfully. Nov 8 00:32:17.452980 containerd[1971]: time="2025-11-08T00:32:17.452918722Z" level=info msg="CreateContainer within sandbox \"ffd98569774d473b7e7123d2a5a0c3f90ca4e0c687054883a1fb1fa487355033\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cf22b35b93fa1106ccce87c84ad71ff7109215a31a80f0ff590bcedaac354eba\"" Nov 8 00:32:17.453541 containerd[1971]: time="2025-11-08T00:32:17.453511245Z" level=info msg="StartContainer for \"cf22b35b93fa1106ccce87c84ad71ff7109215a31a80f0ff590bcedaac354eba\"" Nov 8 00:32:17.494388 systemd[1]: Started cri-containerd-cf22b35b93fa1106ccce87c84ad71ff7109215a31a80f0ff590bcedaac354eba.scope - libcontainer container cf22b35b93fa1106ccce87c84ad71ff7109215a31a80f0ff590bcedaac354eba. Nov 8 00:32:17.544141 containerd[1971]: time="2025-11-08T00:32:17.544051323Z" level=info msg="StartContainer for \"cf22b35b93fa1106ccce87c84ad71ff7109215a31a80f0ff590bcedaac354eba\" returns successfully" Nov 8 00:32:19.049258 containerd[1971]: time="2025-11-08T00:32:19.049199829Z" level=info msg="StopPodSandbox for \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\"" Nov 8 00:32:19.050601 containerd[1971]: time="2025-11-08T00:32:19.049301638Z" level=info msg="TearDown network for sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" successfully" Nov 8 00:32:19.050601 containerd[1971]: time="2025-11-08T00:32:19.049316958Z" level=info msg="StopPodSandbox for \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" returns successfully" Nov 8 00:32:19.050601 containerd[1971]: time="2025-11-08T00:32:19.049821363Z" level=info msg="RemovePodSandbox for \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\"" Nov 8 00:32:19.052892 containerd[1971]: time="2025-11-08T00:32:19.052857632Z" level=info msg="Forcibly stopping sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\"" Nov 8 00:32:19.053027 containerd[1971]: time="2025-11-08T00:32:19.052962666Z" level=info msg="TearDown network for sandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" successfully" Nov 8 00:32:19.058369 containerd[1971]: time="2025-11-08T00:32:19.058323631Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Nov 8 00:32:19.058501 containerd[1971]: time="2025-11-08T00:32:19.058394623Z" level=info msg="RemovePodSandbox \"9b7c31953d57a38bf1286842cff1b66c12f84304ba0ea5cde097ee337499f830\" returns successfully" Nov 8 00:32:19.059027 containerd[1971]: time="2025-11-08T00:32:19.059001912Z" level=info msg="StopPodSandbox for \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\"" Nov 8 00:32:19.059123 containerd[1971]: time="2025-11-08T00:32:19.059114268Z" level=info msg="TearDown network for sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" successfully" Nov 8 00:32:19.059154 containerd[1971]: time="2025-11-08T00:32:19.059126166Z" level=info msg="StopPodSandbox for \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" returns successfully" Nov 8 00:32:19.059431 containerd[1971]: time="2025-11-08T00:32:19.059401373Z" level=info msg="RemovePodSandbox for \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\"" Nov 8 00:32:19.059431 containerd[1971]: time="2025-11-08T00:32:19.059426457Z" level=info msg="Forcibly stopping sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\"" Nov 8 00:32:19.059495 containerd[1971]: time="2025-11-08T00:32:19.059472403Z" level=info msg="TearDown network for sandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" successfully" Nov 8 00:32:19.064627 containerd[1971]: time="2025-11-08T00:32:19.064562604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:32:19.064627 containerd[1971]: time="2025-11-08T00:32:19.064628161Z" level=info msg="RemovePodSandbox \"8655a6bea9be7b13e9265168b02cdd35a32802d1ba4866af71386e3d34bcd1d1\" returns successfully" Nov 8 00:32:21.015307 systemd[1]: cri-containerd-2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e.scope: Deactivated successfully. Nov 8 00:32:21.015544 systemd[1]: cri-containerd-2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e.scope: Consumed 1.694s CPU time, 20.3M memory peak, 0B memory swap peak. Nov 8 00:32:21.046406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e-rootfs.mount: Deactivated successfully. 
Nov 8 00:32:21.056155 kubelet[3163]: E1108 00:32:21.056066 3163 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 8 00:32:21.073041 containerd[1971]: time="2025-11-08T00:32:21.072984501Z" level=info msg="shim disconnected" id=2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e namespace=k8s.io Nov 8 00:32:21.073041 containerd[1971]: time="2025-11-08T00:32:21.073033694Z" level=warning msg="cleaning up after shim disconnected" id=2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e namespace=k8s.io Nov 8 00:32:21.073041 containerd[1971]: time="2025-11-08T00:32:21.073045946Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:32:21.425330 kubelet[3163]: I1108 00:32:21.425298 3163 scope.go:117] "RemoveContainer" containerID="2e8a7390aecce72ab59606ec737fe019ae34d31df13fd1ece76f9bd0e5aba92e" Nov 8 00:32:21.427719 containerd[1971]: time="2025-11-08T00:32:21.427679280Z" level=info msg="CreateContainer within sandbox \"43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:32:21.453116 containerd[1971]: time="2025-11-08T00:32:21.453058893Z" level=info msg="CreateContainer within sandbox \"43d0720f751a0a43546b98477f047a044a5377f7e356999a992413b088732e72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7e892782fdc3082fc3ac0a0dd647f0a4d8d8b9a0a2ae973d225cf36c62505717\"" Nov 8 00:32:21.453743 containerd[1971]: time="2025-11-08T00:32:21.453659736Z" level=info msg="StartContainer for \"7e892782fdc3082fc3ac0a0dd647f0a4d8d8b9a0a2ae973d225cf36c62505717\"" Nov 8 00:32:21.481288 systemd[1]: Started cri-containerd-7e892782fdc3082fc3ac0a0dd647f0a4d8d8b9a0a2ae973d225cf36c62505717.scope - libcontainer container 7e892782fdc3082fc3ac0a0dd647f0a4d8d8b9a0a2ae973d225cf36c62505717. Nov 8 00:32:21.525151 containerd[1971]: time="2025-11-08T00:32:21.525052153Z" level=info msg="StartContainer for \"7e892782fdc3082fc3ac0a0dd647f0a4d8d8b9a0a2ae973d225cf36c62505717\" returns successfully" Nov 8 00:32:31.057719 kubelet[3163]: E1108 00:32:31.057319 3163 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.136:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-136?timeout=10s\": context deadline exceeded"