Jan 17 00:20:53.921971 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:20:53.921994 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:53.922007 kernel: BIOS-provided physical RAM map:
Jan 17 00:20:53.922014 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:20:53.922020 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Jan 17 00:20:53.922027 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Jan 17 00:20:53.922035 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Jan 17 00:20:53.922042 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Jan 17 00:20:53.922048 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Jan 17 00:20:53.922058 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Jan 17 00:20:53.922065 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Jan 17 00:20:53.922072 kernel: NX (Execute Disable) protection: active
Jan 17 00:20:53.922078 kernel: APIC: Static calls initialized
Jan 17 00:20:53.922085 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:20:53.922095 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Jan 17 00:20:53.922105 kernel: SMBIOS 2.7 present.
Jan 17 00:20:53.922113 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 17 00:20:53.922131 kernel: Hypervisor detected: KVM
Jan 17 00:20:53.922140 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:20:53.922148 kernel: kvm-clock: using sched offset of 4156558589 cycles
Jan 17 00:20:53.922156 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:20:53.922164 kernel: tsc: Detected 2499.996 MHz processor
Jan 17 00:20:53.922173 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:20:53.922181 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:20:53.922189 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Jan 17 00:20:53.922200 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:20:53.922208 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:20:53.922216 kernel: Using GB pages for direct mapping
Jan 17 00:20:53.922224 kernel: Secure boot disabled
Jan 17 00:20:53.922231 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:20:53.922239 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Jan 17 00:20:53.922247 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:20:53.922255 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:20:53.922263 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:20:53.922274 kernel: ACPI: FACS 0x00000000789D0000 000040
Jan 17 00:20:53.922282 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 17 00:20:53.922290 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:20:53.922298 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:20:53.922305 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 17 00:20:53.922314 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 17 00:20:53.922326 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:20:53.922336 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 17 00:20:53.922345 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Jan 17 00:20:53.922353 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Jan 17 00:20:53.922361 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Jan 17 00:20:53.922370 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Jan 17 00:20:53.922378 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Jan 17 00:20:53.922386 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Jan 17 00:20:53.922397 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Jan 17 00:20:53.922405 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Jan 17 00:20:53.922413 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Jan 17 00:20:53.922422 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Jan 17 00:20:53.922430 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Jan 17 00:20:53.922438 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Jan 17 00:20:53.922459 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:20:53.922468 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:20:53.922476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 17 00:20:53.922488 kernel: NUMA: Initialized distance table, cnt=1
Jan 17 00:20:53.922496 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Jan 17 00:20:53.922504 kernel: Zone ranges:
Jan 17 00:20:53.922512 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:20:53.922521 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Jan 17 00:20:53.922532 kernel: Normal empty
Jan 17 00:20:53.922541 kernel: Movable zone start for each node
Jan 17 00:20:53.922549 kernel: Early memory node ranges
Jan 17 00:20:53.924527 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:20:53.924555 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Jan 17 00:20:53.924605 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Jan 17 00:20:53.924613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Jan 17 00:20:53.924622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:20:53.924630 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:20:53.924639 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jan 17 00:20:53.924648 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Jan 17 00:20:53.924656 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 17 00:20:53.924665 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:20:53.924673 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 17 00:20:53.924685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:20:53.924693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:20:53.924702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:20:53.924710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:20:53.924719 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:20:53.924727 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 17 00:20:53.924736 kernel: TSC deadline timer available
Jan 17 00:20:53.924744 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:20:53.924753 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:20:53.924764 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Jan 17 00:20:53.924773 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:20:53.924781 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:20:53.924790 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:20:53.924798 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:20:53.924807 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:20:53.924815 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:20:53.924824 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:20:53.924832 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:20:53.924845 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:53.924854 kernel: random: crng init done
Jan 17 00:20:53.924862 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:20:53.924871 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:20:53.924879 kernel: Fallback order for Node 0: 0
Jan 17 00:20:53.924888 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Jan 17 00:20:53.924896 kernel: Policy zone: DMA32
Jan 17 00:20:53.924905 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:20:53.924916 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 162916K reserved, 0K cma-reserved)
Jan 17 00:20:53.924925 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:20:53.924934 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:20:53.924942 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:20:53.924950 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:20:53.924959 kernel: Dynamic Preempt: voluntary
Jan 17 00:20:53.924967 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:20:53.924980 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:20:53.924989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:20:53.925000 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:20:53.925009 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:20:53.925017 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:20:53.925025 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:20:53.925034 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:20:53.925042 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 17 00:20:53.925051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:20:53.925071 kernel: Console: colour dummy device 80x25
Jan 17 00:20:53.925080 kernel: printk: console [tty0] enabled
Jan 17 00:20:53.925089 kernel: printk: console [ttyS0] enabled
Jan 17 00:20:53.925098 kernel: ACPI: Core revision 20230628
Jan 17 00:20:53.925107 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 17 00:20:53.925118 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:20:53.925127 kernel: x2apic enabled
Jan 17 00:20:53.925137 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:20:53.925146 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:20:53.925155 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Jan 17 00:20:53.925166 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:20:53.925175 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:20:53.925185 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:20:53.925194 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:20:53.925202 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:20:53.925211 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:20:53.925220 kernel: RETBleed: Vulnerable
Jan 17 00:20:53.925229 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:20:53.925238 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:20:53.925247 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:20:53.925258 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 00:20:53.925267 kernel: active return thunk: its_return_thunk
Jan 17 00:20:53.925276 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:20:53.925285 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:20:53.925294 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:20:53.925303 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:20:53.925311 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 17 00:20:53.925320 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 17 00:20:53.925329 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:20:53.925338 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:20:53.925346 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:20:53.925358 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 17 00:20:53.925366 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:20:53.925375 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 17 00:20:53.925384 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 17 00:20:53.925393 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 17 00:20:53.925402 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 17 00:20:53.925411 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 17 00:20:53.925419 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 17 00:20:53.925436 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 17 00:20:53.925445 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:20:53.925454 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:20:53.925465 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:20:53.925474 kernel: landlock: Up and running.
Jan 17 00:20:53.925483 kernel: SELinux: Initializing.
Jan 17 00:20:53.925492 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:20:53.925501 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:20:53.925510 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:20:53.925519 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:53.925528 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:53.925537 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:20:53.925547 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:20:53.925571 kernel: signal: max sigframe size: 3632
Jan 17 00:20:53.926645 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:20:53.926656 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:20:53.926666 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:20:53.926675 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:20:53.926684 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:20:53.926693 kernel: .... node #0, CPUs: #1
Jan 17 00:20:53.926703 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 17 00:20:53.926713 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:20:53.926727 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:20:53.926736 kernel: smpboot: Max logical packages: 1
Jan 17 00:20:53.926746 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Jan 17 00:20:53.926755 kernel: devtmpfs: initialized
Jan 17 00:20:53.926764 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:20:53.926773 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Jan 17 00:20:53.926783 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:20:53.926792 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:20:53.926801 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:20:53.926812 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:20:53.926821 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:20:53.926830 kernel: audit: type=2000 audit(1768609252.913:1): state=initialized audit_enabled=0 res=1
Jan 17 00:20:53.926839 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:20:53.926848 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:20:53.926857 kernel: cpuidle: using governor menu
Jan 17 00:20:53.926866 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:20:53.926875 kernel: dca service started, version 1.12.1
Jan 17 00:20:53.926885 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:20:53.926896 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:20:53.926905 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:20:53.926914 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:20:53.926923 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:20:53.926932 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:20:53.926941 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:20:53.926950 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:20:53.926959 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:20:53.926968 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 17 00:20:53.926980 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:20:53.926989 kernel: ACPI: Interpreter enabled
Jan 17 00:20:53.926997 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:20:53.927006 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:20:53.927015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:20:53.927024 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:20:53.927034 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 17 00:20:53.927042 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:20:53.927202 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:20:53.927307 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 17 00:20:53.927400 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 17 00:20:53.927412 kernel: acpiphp: Slot [3] registered
Jan 17 00:20:53.927421 kernel: acpiphp: Slot [4] registered
Jan 17 00:20:53.927430 kernel: acpiphp: Slot [5] registered
Jan 17 00:20:53.927439 kernel: acpiphp: Slot [6] registered
Jan 17 00:20:53.927448 kernel: acpiphp: Slot [7] registered
Jan 17 00:20:53.927459 kernel: acpiphp: Slot [8] registered
Jan 17 00:20:53.927468 kernel: acpiphp: Slot [9] registered
Jan 17 00:20:53.927477 kernel: acpiphp: Slot [10] registered
Jan 17 00:20:53.927486 kernel: acpiphp: Slot [11] registered
Jan 17 00:20:53.927495 kernel: acpiphp: Slot [12] registered
Jan 17 00:20:53.927504 kernel: acpiphp: Slot [13] registered
Jan 17 00:20:53.927513 kernel: acpiphp: Slot [14] registered
Jan 17 00:20:53.927522 kernel: acpiphp: Slot [15] registered
Jan 17 00:20:53.927531 kernel: acpiphp: Slot [16] registered
Jan 17 00:20:53.927539 kernel: acpiphp: Slot [17] registered
Jan 17 00:20:53.927551 kernel: acpiphp: Slot [18] registered
Jan 17 00:20:53.928620 kernel: acpiphp: Slot [19] registered
Jan 17 00:20:53.928637 kernel: acpiphp: Slot [20] registered
Jan 17 00:20:53.928647 kernel: acpiphp: Slot [21] registered
Jan 17 00:20:53.928656 kernel: acpiphp: Slot [22] registered
Jan 17 00:20:53.928665 kernel: acpiphp: Slot [23] registered
Jan 17 00:20:53.928674 kernel: acpiphp: Slot [24] registered
Jan 17 00:20:53.928683 kernel: acpiphp: Slot [25] registered
Jan 17 00:20:53.928692 kernel: acpiphp: Slot [26] registered
Jan 17 00:20:53.928705 kernel: acpiphp: Slot [27] registered
Jan 17 00:20:53.928714 kernel: acpiphp: Slot [28] registered
Jan 17 00:20:53.928723 kernel: acpiphp: Slot [29] registered
Jan 17 00:20:53.928732 kernel: acpiphp: Slot [30] registered
Jan 17 00:20:53.928741 kernel: acpiphp: Slot [31] registered
Jan 17 00:20:53.928750 kernel: PCI host bridge to bus 0000:00
Jan 17 00:20:53.928885 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:20:53.928974 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:20:53.929062 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:20:53.929145 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 17 00:20:53.929227 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:20:53.929309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:20:53.929419 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 17 00:20:53.929532 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 17 00:20:53.929648 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 17 00:20:53.929750 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 17 00:20:53.929844 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 17 00:20:53.929937 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 17 00:20:53.930030 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 17 00:20:53.930123 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 17 00:20:53.930215 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 17 00:20:53.930307 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 17 00:20:53.930409 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 17 00:20:53.930504 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Jan 17 00:20:53.932793 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 17 00:20:53.932912 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Jan 17 00:20:53.933008 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:20:53.933110 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:20:53.933212 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Jan 17 00:20:53.933315 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:20:53.933407 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Jan 17 00:20:53.933419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:20:53.933438 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:20:53.933448 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:20:53.933457 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:20:53.933466 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 17 00:20:53.933480 kernel: iommu: Default domain type: Translated
Jan 17 00:20:53.933489 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:20:53.933498 kernel: efivars: Registered efivars operations
Jan 17 00:20:53.933507 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:20:53.933517 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:20:53.933526 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Jan 17 00:20:53.933535 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Jan 17 00:20:53.933639 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 17 00:20:53.933730 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 17 00:20:53.933824 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:20:53.933837 kernel: vgaarb: loaded
Jan 17 00:20:53.933846 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 17 00:20:53.933855 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 17 00:20:53.933864 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:20:53.933873 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:20:53.933883 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:20:53.933892 kernel: pnp: PnP ACPI init
Jan 17 00:20:53.933900 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:20:53.933913 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:20:53.933922 kernel: NET: Registered PF_INET protocol family
Jan 17 00:20:53.933932 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:20:53.933941 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:20:53.933950 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:20:53.933959 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:20:53.933968 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:20:53.933977 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:20:53.933986 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:20:53.933998 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:20:53.934007 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:20:53.934016 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:20:53.934106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:20:53.934279 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:20:53.934368 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:20:53.934451 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 17 00:20:53.934533 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Jan 17 00:20:53.937960 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 17 00:20:53.937989 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:20:53.938000 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:20:53.938010 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Jan 17 00:20:53.938019 kernel: clocksource: Switched to clocksource tsc
Jan 17 00:20:53.938028 kernel: Initialise system trusted keyrings
Jan 17 00:20:53.938038 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 17 00:20:53.938047 kernel: Key type asymmetric registered
Jan 17 00:20:53.938063 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:20:53.938072 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:20:53.938081 kernel: io scheduler mq-deadline registered
Jan 17 00:20:53.938091 kernel: io scheduler kyber registered
Jan 17 00:20:53.938100 kernel: io scheduler bfq registered
Jan 17 00:20:53.938109 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:20:53.938118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:20:53.938127 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:20:53.938136 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 17 00:20:53.938149 kernel: i8042: Warning: Keylock active
Jan 17 00:20:53.938158 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 17 00:20:53.938167 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 17 00:20:53.938279 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 17 00:20:53.938371 kernel: rtc_cmos 00:00: registered as rtc0
Jan 17 00:20:53.938459 kernel: rtc_cmos 00:00: setting system clock to 2026-01-17T00:20:53 UTC (1768609253)
Jan 17 00:20:53.938547 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 17 00:20:53.938579 kernel: intel_pstate: CPU model not supported
Jan 17 00:20:53.938592 kernel: efifb: probing for efifb
Jan 17 00:20:53.938602 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Jan 17 00:20:53.938612 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Jan 17 00:20:53.938621 kernel: efifb: scrolling: redraw
Jan 17 00:20:53.938630 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:20:53.938639 kernel: Console: switching to colour frame buffer device 100x37
Jan 17 00:20:53.938648 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:20:53.938657 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:20:53.938666 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:20:53.938678 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:20:53.938687 kernel: Segment Routing with IPv6
Jan 17 00:20:53.938696 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:20:53.938705 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:20:53.938714 kernel: Key type dns_resolver registered
Jan 17 00:20:53.938723 kernel: IPI shorthand broadcast: enabled
Jan 17 00:20:53.938753 kernel: sched_clock: Marking stable (455002956, 124700328)->(665252039, -85548755)
Jan 17 00:20:53.938766 kernel: registered taskstats version 1
Jan 17 00:20:53.938776 kernel: Loading compiled-in X.509 certificates
Jan 17 00:20:53.938785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:20:53.938798 kernel: Key type .fscrypt registered
Jan 17 00:20:53.938807 kernel: Key type fscrypt-provisioning registered
Jan 17 00:20:53.938816 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:20:53.938826 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:20:53.938835 kernel: ima: No architecture policies found
Jan 17 00:20:53.938845 kernel: clk: Disabling unused clocks
Jan 17 00:20:53.938855 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:20:53.938864 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:20:53.938874 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:20:53.938886 kernel: Run /init as init process
Jan 17 00:20:53.938896 kernel: with arguments:
Jan 17 00:20:53.938905 kernel: /init
Jan 17 00:20:53.938914 kernel: with environment:
Jan 17 00:20:53.938924 kernel: HOME=/
Jan 17 00:20:53.938933 kernel: TERM=linux
Jan 17 00:20:53.938946 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:20:53.938959 systemd[1]: Detected virtualization amazon.
Jan 17 00:20:53.938972 systemd[1]: Detected architecture x86-64.
Jan 17 00:20:53.938982 systemd[1]: Running in initrd.
Jan 17 00:20:53.938991 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:20:53.939001 systemd[1]: Hostname set to <localhost>.
Jan 17 00:20:53.939011 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:20:53.939021 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:20:53.939030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:20:53.939040 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:20:53.939054 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:20:53.939064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:20:53.939074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:20:53.939087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:20:53.939100 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:20:53.939111 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:20:53.939121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:20:53.939131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:20:53.939141 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:20:53.939151 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:20:53.939161 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:20:53.939171 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:20:53.939184 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:20:53.939197 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:20:53.939207 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:20:53.939217 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:20:53.939227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:20:53.939237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:20:53.939247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:20:53.939258 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:20:53.939270 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:20:53.939280 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:20:53.939290 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:20:53.939300 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:20:53.939310 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:20:53.939320 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:20:53.939330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:53.939340 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:20:53.939374 systemd-journald[179]: Collecting audit messages is disabled.
Jan 17 00:20:53.939401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:20:53.939411 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:20:53.939425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:20:53.939436 systemd-journald[179]: Journal started
Jan 17 00:20:53.939458 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b7a1620720c42a2712fa2378a10d4) is 4.7M, max 38.2M, 33.4M free.
Jan 17 00:20:53.920686 systemd-modules-load[180]: Inserted module 'overlay'
Jan 17 00:20:53.949581 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:20:53.950010 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:53.957857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:53.969582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:20:53.970929 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:20:53.979314 kernel: Bridge firewalling registered
Jan 17 00:20:53.972933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:20:53.974793 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 17 00:20:53.981077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:20:53.985143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:20:53.995765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:20:53.998722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:20:53.999659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:54.008445 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 17 00:20:54.015018 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:20:54.016427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:20:54.021797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:20:54.027943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:20:54.037072 dracut-cmdline[210]: dracut-dracut-053
Jan 17 00:20:54.042015 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:20:54.083967 systemd-resolved[213]: Positive Trust Anchors:
Jan 17 00:20:54.083983 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:20:54.084047 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:20:54.094095 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 17 00:20:54.095516 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:20:54.096250 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:20:54.129614 kernel: SCSI subsystem initialized
Jan 17 00:20:54.139589 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:20:54.150589 kernel: iscsi: registered transport (tcp)
Jan 17 00:20:54.172881 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:20:54.172967 kernel: QLogic iSCSI HBA Driver
Jan 17 00:20:54.211525 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:20:54.216751 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:20:54.249879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:20:54.249958 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:20:54.249981 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:20:54.292593 kernel: raid6: avx512x4 gen() 17985 MB/s
Jan 17 00:20:54.310583 kernel: raid6: avx512x2 gen() 17969 MB/s
Jan 17 00:20:54.328583 kernel: raid6: avx512x1 gen() 17900 MB/s
Jan 17 00:20:54.346581 kernel: raid6: avx2x4 gen() 17835 MB/s
Jan 17 00:20:54.364582 kernel: raid6: avx2x2 gen() 17816 MB/s
Jan 17 00:20:54.382871 kernel: raid6: avx2x1 gen() 13798 MB/s
Jan 17 00:20:54.382936 kernel: raid6: using algorithm avx512x4 gen() 17985 MB/s
Jan 17 00:20:54.401797 kernel: raid6: .... xor() 7816 MB/s, rmw enabled
Jan 17 00:20:54.401847 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:20:54.423602 kernel: xor: automatically using best checksumming function avx
Jan 17 00:20:54.583598 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:20:54.593925 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:20:54.600763 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:20:54.614763 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 17 00:20:54.619921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:20:54.630303 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:20:54.646972 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jan 17 00:20:54.677461 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:20:54.681791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:20:54.736184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:20:54.745971 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:20:54.765931 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:20:54.774395 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:20:54.775643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:20:54.776220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:20:54.785767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:20:54.812052 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:20:54.838001 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:20:54.846844 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:20:54.847128 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:20:54.861608 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 17 00:20:54.866672 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:20:54.867662 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:54.878693 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:b0:ec:df:d8:1f
Jan 17 00:20:54.868499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:54.869155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:20:54.869338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:54.872114 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:54.880968 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:20:54.887245 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:54.908126 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:20:54.908192 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:20:54.908425 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 17 00:20:54.908448 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:20:54.911974 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:20:54.912937 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:54.923579 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:20:54.926960 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:20:54.932300 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:20:54.932355 kernel: GPT:9289727 != 33554431
Jan 17 00:20:54.932377 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:20:54.934167 kernel: GPT:9289727 != 33554431
Jan 17 00:20:54.936337 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:20:54.936386 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:54.953859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:20:54.959981 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:20:54.991607 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:20:55.057607 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (442)
Jan 17 00:20:55.066261 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Jan 17 00:20:55.074547 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:20:55.115735 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:20:55.121144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:20:55.121704 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:20:55.128574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:20:55.133803 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:20:55.140771 disk-uuid[631]: Primary Header is updated.
Jan 17 00:20:55.140771 disk-uuid[631]: Secondary Entries is updated.
Jan 17 00:20:55.140771 disk-uuid[631]: Secondary Header is updated.
Jan 17 00:20:55.145598 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:55.152600 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:55.157603 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:56.157596 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:20:56.157898 disk-uuid[632]: The operation has completed successfully.
Jan 17 00:20:56.266495 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:20:56.266604 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:20:56.285762 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:20:56.289408 sh[975]: Success
Jan 17 00:20:56.310589 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:20:56.405239 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:20:56.414622 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:20:56.418364 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:20:56.447177 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:20:56.447240 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:56.447255 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:20:56.450352 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:20:56.450405 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:20:56.582585 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:20:56.596617 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:20:56.597807 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:20:56.601732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:20:56.604481 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:20:56.623602 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:56.625603 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:56.625653 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:20:56.650656 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:20:56.680810 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:56.680860 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:20:56.688955 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:20:56.696837 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:20:56.734061 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:20:56.743771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:20:56.764193 systemd-networkd[1167]: lo: Link UP
Jan 17 00:20:56.764206 systemd-networkd[1167]: lo: Gained carrier
Jan 17 00:20:56.766148 systemd-networkd[1167]: Enumeration completed
Jan 17 00:20:56.766871 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:20:56.766980 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:20:56.766985 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:20:56.768162 systemd[1]: Reached target network.target - Network.
Jan 17 00:20:56.771012 systemd-networkd[1167]: eth0: Link UP
Jan 17 00:20:56.771018 systemd-networkd[1167]: eth0: Gained carrier
Jan 17 00:20:56.771031 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:20:56.782658 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.17.137/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:20:57.043159 ignition[1113]: Ignition 2.19.0
Jan 17 00:20:57.043170 ignition[1113]: Stage: fetch-offline
Jan 17 00:20:57.044792 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:20:57.043372 ignition[1113]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:57.043381 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:57.043632 ignition[1113]: Ignition finished successfully
Jan 17 00:20:57.050748 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:20:57.074593 ignition[1176]: Ignition 2.19.0
Jan 17 00:20:57.074604 ignition[1176]: Stage: fetch
Jan 17 00:20:57.074957 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:57.074967 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:57.075052 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:57.082921 ignition[1176]: PUT result: OK
Jan 17 00:20:57.084490 ignition[1176]: parsed url from cmdline: ""
Jan 17 00:20:57.084501 ignition[1176]: no config URL provided
Jan 17 00:20:57.084508 ignition[1176]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:20:57.084520 ignition[1176]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:20:57.084536 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:57.085053 ignition[1176]: PUT result: OK
Jan 17 00:20:57.085096 ignition[1176]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:20:57.085666 ignition[1176]: GET result: OK
Jan 17 00:20:57.085724 ignition[1176]: parsing config with SHA512: d0ccfa50135b6039f81231030ea2ae0b0736392347496769d9a90313a6f4002c0ad5755605b8f35cdf0e2bf162e463e4534eb1a7172bae72731b9aa102b23a33
Jan 17 00:20:57.092423 unknown[1176]: fetched base config from "system"
Jan 17 00:20:57.092896 ignition[1176]: fetch: fetch complete
Jan 17 00:20:57.092434 unknown[1176]: fetched base config from "system"
Jan 17 00:20:57.092900 ignition[1176]: fetch: fetch passed
Jan 17 00:20:57.092440 unknown[1176]: fetched user config from "aws"
Jan 17 00:20:57.092940 ignition[1176]: Ignition finished successfully
Jan 17 00:20:57.095767 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:20:57.101745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:20:57.116552 ignition[1182]: Ignition 2.19.0
Jan 17 00:20:57.116585 ignition[1182]: Stage: kargs
Jan 17 00:20:57.116955 ignition[1182]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:57.116964 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:57.117045 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:57.117924 ignition[1182]: PUT result: OK
Jan 17 00:20:57.120322 ignition[1182]: kargs: kargs passed
Jan 17 00:20:57.120379 ignition[1182]: Ignition finished successfully
Jan 17 00:20:57.121749 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:20:57.125780 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:20:57.149888 ignition[1188]: Ignition 2.19.0
Jan 17 00:20:57.149900 ignition[1188]: Stage: disks
Jan 17 00:20:57.150268 ignition[1188]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:20:57.150278 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:20:57.150369 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:20:57.151449 ignition[1188]: PUT result: OK
Jan 17 00:20:57.154291 ignition[1188]: disks: disks passed
Jan 17 00:20:57.154350 ignition[1188]: Ignition finished successfully
Jan 17 00:20:57.155345 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:20:57.156160 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:20:57.156812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:20:57.157092 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:20:57.157354 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:20:57.157643 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:20:57.164736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:20:57.199738 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:20:57.202720 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:20:57.207704 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:20:57.306586 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:20:57.306717 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:20:57.307602 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:20:57.324697 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:20:57.326871 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:20:57.327582 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:20:57.327626 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:20:57.327649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:20:57.335010 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:20:57.341623 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Jan 17 00:20:57.341597 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:20:57.348399 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:20:57.348466 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:20:57.348480 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:20:57.362591 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:20:57.365160 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:20:57.737208 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:20:57.766901 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:20:57.771981 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:20:57.776301 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:20:58.095091 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:20:58.101663 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:20:58.105503 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:20:58.111090 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:20:58.113028 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:20:58.138584 ignition[1328]: INFO : Ignition 2.19.0 Jan 17 00:20:58.138584 ignition[1328]: INFO : Stage: mount Jan 17 00:20:58.140849 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:20:58.140849 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:20:58.140849 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:20:58.144109 ignition[1328]: INFO : PUT result: OK Jan 17 00:20:58.148346 ignition[1328]: INFO : mount: mount passed Jan 17 00:20:58.150111 ignition[1328]: INFO : Ignition finished successfully Jan 17 00:20:58.150383 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:20:58.153741 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:20:58.163741 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:20:58.171202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:20:58.193584 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1341) Jan 17 00:20:58.193646 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:20:58.195673 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:20:58.198098 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 00:20:58.202579 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 00:20:58.204831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:20:58.225729 ignition[1358]: INFO : Ignition 2.19.0 Jan 17 00:20:58.225729 ignition[1358]: INFO : Stage: files Jan 17 00:20:58.227179 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:20:58.227179 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:20:58.227179 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:20:58.228667 ignition[1358]: INFO : PUT result: OK Jan 17 00:20:58.229879 ignition[1358]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:20:58.242372 ignition[1358]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:20:58.242372 ignition[1358]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:20:58.267521 ignition[1358]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:20:58.268349 ignition[1358]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:20:58.268349 ignition[1358]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:20:58.268311 unknown[1358]: wrote ssh authorized keys file for user: core Jan 17 00:20:58.281724 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:20:58.282555 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:20:58.282555 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:20:58.282555 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: 
attempt #1 Jan 17 00:20:58.389237 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:20:58.565209 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:20:58.565209 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:20:58.566999 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:20:58.761694 systemd-networkd[1167]: eth0: Gained IPv6LL Jan 17 00:20:58.848724 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 00:20:59.091609 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:20:59.091609 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:20:59.093248 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:20:59.570466 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 00:21:00.535256 ignition[1358]: INFO : files: createFilesystemsFiles: createFiles: op(c): 
[finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:21:00.535256 ignition[1358]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 17 00:21:00.550023 ignition[1358]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:21:00.551127 ignition[1358]: INFO : files: files passed Jan 17 00:21:00.551127 ignition[1358]: INFO : Ignition finished successfully Jan 17 00:21:00.552132 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:21:00.561847 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:21:00.565754 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:21:00.569294 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:21:00.569437 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:21:00.587206 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:00.587206 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:00.588964 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:21:00.590735 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:00.591620 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:21:00.596717 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:21:00.620211 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:21:00.620321 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:21:00.621816 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
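[Editor's note] Both the config parse in the fetch stage ("parsing config with SHA512: d0ccfa50…") and the artifacts written above (helm, cilium, the kubernetes sysext image) are integrity-checked before anything lands in /sysroot. The same check in miniature, as a hedged Python sketch rather than Ignition's actual Go code:

```python
import hashlib

def verify_sha512(payload: bytes, expected_hex: str) -> bytes:
    # Refuse to use fetched bytes whose digest does not match the pinned value.
    actual = hashlib.sha512(payload).hexdigest()
    if actual != expected_hex.lower():
        raise ValueError(f"SHA512 mismatch: expected {expected_hex}, got {actual}")
    return payload

# e.g. verify_sha512(user_data, "d0ccfa50...")  # full digest as logged above
```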
Jan 17 00:21:00.622492 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:21:00.623330 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:21:00.629742 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:21:00.642388 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:00.648713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:21:00.657870 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:00.658451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:00.659029 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:21:00.659772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:21:00.659886 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:21:00.660959 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:21:00.661899 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:21:00.662634 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:21:00.663304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:21:00.663997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:21:00.664702 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:21:00.665488 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:21:00.666437 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:21:00.667247 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:21:00.668278 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:21:00.669031 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:21:00.669150 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:21:00.670205 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:21:00.671101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:00.671712 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:21:00.671812 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:00.672463 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:21:00.672599 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:21:00.673694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:21:00.673814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:21:00.674744 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:21:00.674848 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:21:00.682832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:21:00.683315 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:21:00.683493 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:00.686667 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:21:00.687150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 17 00:21:00.687304 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:00.687992 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:21:00.688110 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:21:00.693355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:21:00.693845 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:21:00.700469 ignition[1410]: INFO : Ignition 2.19.0 Jan 17 00:21:00.701659 ignition[1410]: INFO : Stage: umount Jan 17 00:21:00.701659 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:21:00.701659 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 00:21:00.703641 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 00:21:00.705070 ignition[1410]: INFO : PUT result: OK Jan 17 00:21:00.709758 ignition[1410]: INFO : umount: umount passed Jan 17 00:21:00.711473 ignition[1410]: INFO : Ignition finished successfully Jan 17 00:21:00.712302 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:21:00.713035 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:21:00.714841 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:21:00.715693 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:21:00.716863 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:21:00.716922 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:21:00.718641 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:21:00.718702 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:21:00.719145 systemd[1]: Stopped target network.target - Network. Jan 17 00:21:00.719471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:21:00.719517 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:21:00.720360 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:21:00.723202 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:21:00.723279 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:00.723988 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:21:00.724888 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:21:00.726066 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:21:00.726123 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:21:00.726727 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:21:00.726781 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:21:00.727358 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:21:00.727422 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:21:00.728016 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:21:00.728074 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:21:00.729000 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:21:00.729600 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:21:00.731799 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 17 00:21:00.733515 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:21:00.733673 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:21:00.734199 systemd-networkd[1167]: eth0: DHCPv6 lease lost Jan 17 00:21:00.736255 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:21:00.736366 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:21:00.738760 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:21:00.738911 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:21:00.741127 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:21:00.741186 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:21:00.741920 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:21:00.741984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:21:00.749776 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:21:00.750330 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:21:00.750405 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:21:00.751031 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:21:00.751091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:00.752631 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:21:00.752692 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:00.753631 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:21:00.753687 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:00.754408 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:00.762301 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:21:00.763518 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:00.766308 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:21:00.766402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:21:00.767189 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:21:00.767226 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:00.767578 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:21:00.767647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:21:00.768798 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:21:00.768860 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:21:00.770003 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:21:00.770066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:21:00.773518 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:21:00.774484 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:21:00.774571 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:00.776461 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 17 00:21:00.776520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:00.782961 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:21:00.783842 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:21:00.790187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:21:00.790849 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:21:00.791723 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:21:00.796786 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:21:00.839820 systemd[1]: Switching root. Jan 17 00:21:00.869148 systemd-journald[179]: Journal stopped Jan 17 00:21:03.005875 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). Jan 17 00:21:03.005948 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:21:03.005964 kernel: SELinux: policy capability open_perms=1 Jan 17 00:21:03.005980 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:21:03.005992 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:21:03.006003 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:21:03.006020 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:21:03.006032 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:21:03.006043 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:21:03.006062 kernel: audit: type=1403 audit(1768609261.933:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:21:03.006080 systemd[1]: Successfully loaded SELinux policy in 65.075ms. Jan 17 00:21:03.006105 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.541ms. Jan 17 00:21:03.006119 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:21:03.006131 systemd[1]: Detected virtualization amazon. Jan 17 00:21:03.006144 systemd[1]: Detected architecture x86-64. Jan 17 00:21:03.006156 systemd[1]: Detected first boot. Jan 17 00:21:03.006169 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:21:03.006181 zram_generator::config[1469]: No configuration found. Jan 17 00:21:03.006198 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:21:03.006215 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:21:03.006227 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 00:21:03.006240 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:21:03.006253 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:21:03.006266 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:21:03.006278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:21:03.006291 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:21:03.006307 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:21:03.006319 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
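[Editor's note] Several unit names above carry \x2d escapes (system-addon\x2dconfig.slice here, dev-disk-by\x2dlabel-OEM.device later). That is systemd's unit-name escaping: "/" in a path maps to "-", so a literal "-" must itself be escaped to keep the mapping reversible. A small Python approximation of the rule described in systemd-escape(1), not the reference implementation:

```python
def systemd_escape(path: str) -> str:
    # "/" becomes "-"; a literal "-" (and other non [A-Za-z0-9:_.] chars)
    # becomes a C-style \xXX escape; a leading "." would also be escaped.
    out = []
    for i, ch in enumerate(path.strip("/") or "/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

assert systemd_escape("/dev/disk/by-label/OEM") == "dev-disk-by\\x2dlabel-OEM"
```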
Jan 17 00:21:03.006331 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:21:03.006343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:21:03.006356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:21:03.006369 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:21:03.006382 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:21:03.006395 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:21:03.006411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:21:03.006423 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:21:03.006435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:21:03.006448 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:21:03.006461 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:21:03.006473 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:21:03.006486 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:21:03.006498 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:21:03.006510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:21:03.006525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:21:03.006538 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:21:03.006550 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:21:03.006601 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:21:03.006614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:21:03.006626 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:21:03.006638 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:21:03.006651 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:21:03.006663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:21:03.006680 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:21:03.006694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:03.006706 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:21:03.006718 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:21:03.006730 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:21:03.006742 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:21:03.006754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:03.006769 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:21:03.006784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:21:03.006796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 00:21:03.006809 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:21:03.006822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:03.006834 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:21:03.006846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:03.006859 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:21:03.006871 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:21:03.006885 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:21:03.006900 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:21:03.006912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:21:03.006924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:21:03.006936 kernel: loop: module loaded Jan 17 00:21:03.006948 kernel: fuse: init (API version 7.39) Jan 17 00:21:03.006961 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:21:03.006999 systemd-journald[1576]: Collecting audit messages is disabled. Jan 17 00:21:03.007026 systemd-journald[1576]: Journal started Jan 17 00:21:03.007050 systemd-journald[1576]: Runtime Journal (/run/log/journal/ec2b7a1620720c42a2712fa2378a10d4) is 4.7M, max 38.2M, 33.4M free. Jan 17 00:21:03.020349 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:21:03.020419 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:03.027639 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:21:03.030550 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:21:03.031738 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:21:03.032386 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:21:03.032895 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:21:03.034800 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:21:03.035694 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:21:03.037934 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:21:03.038921 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:21:03.040253 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:21:03.040490 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:21:03.041220 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:03.041450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:21:03.042112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:21:03.042320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:03.042982 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 00:21:03.043188 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:21:03.044035 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:03.044238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:03.045129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:21:03.045949 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:21:03.056430 kernel: ACPI: bus type drm_connector registered Jan 17 00:21:03.053980 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:21:03.054169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:21:03.061546 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:21:03.072695 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:21:03.075658 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:21:03.076225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:21:03.078382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:21:03.088126 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:21:03.092335 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:21:03.098579 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:21:03.101701 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:21:03.106972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:21:03.115774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:21:03.118377 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:21:03.118850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:21:03.123030 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:21:03.140733 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:21:03.141283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:21:03.147435 systemd-journald[1576]: Time spent on flushing to /var/log/journal/ec2b7a1620720c42a2712fa2378a10d4 is 43.720ms for 976 entries. Jan 17 00:21:03.147435 systemd-journald[1576]: System Journal (/var/log/journal/ec2b7a1620720c42a2712fa2378a10d4) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:21:03.199203 systemd-journald[1576]: Received client request to flush runtime journal. Jan 17 00:21:03.148407 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:21:03.155838 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:21:03.175168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:21:03.196174 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 17 00:21:03.198323 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 17 00:21:03.198338 systemd-tmpfiles[1616]: ACLs are not supported, ignoring. Jan 17 00:21:03.202512 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:21:03.208218 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:21:03.218789 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:21:03.257083 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:21:03.264792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:21:03.281799 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Jan 17 00:21:03.282123 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Jan 17 00:21:03.286790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:21:03.730192 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:21:03.737936 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:21:03.764350 systemd-udevd[1649]: Using default interface naming scheme 'v255'. Jan 17 00:21:03.843199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:21:03.851715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:21:03.884709 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:21:03.896448 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:21:03.916703 (udev-worker)[1656]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:21:03.958021 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:21:03.965591 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:21:03.977630 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 00:21:03.989078 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:21:03.994597 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:21:04.003587 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 17 00:21:04.034061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:04.040921 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 00:21:04.049825 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:21:04.060612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:21:04.060844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:04.066404 systemd-networkd[1652]: lo: Link UP Jan 17 00:21:04.069748 systemd-networkd[1652]: lo: Gained carrier Jan 17 00:21:04.070883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:21:04.074068 systemd-networkd[1652]: Enumeration completed Jan 17 00:21:04.074426 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:04.074430 systemd-networkd[1652]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:21:04.079901 systemd-networkd[1652]: eth0: Link UP Jan 17 00:21:04.080864 systemd-networkd[1652]: eth0: Gained carrier Jan 17 00:21:04.081006 systemd-networkd[1652]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:21:04.085585 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1654) Jan 17 00:21:04.085732 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:21:04.095704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:21:04.096622 systemd-networkd[1652]: eth0: DHCPv4 address 172.31.17.137/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 00:21:04.193609 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 00:21:04.194395 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:21:04.200736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:21:04.229011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:21:04.245316 lvm[1774]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:21:04.271507 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:21:04.272715 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:21:04.275895 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:21:04.284260 lvm[1780]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:21:04.313616 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:21:04.314711 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:21:04.315151 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:21:04.315177 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:21:04.315758 systemd[1]: Reached target machines.target - Containers. Jan 17 00:21:04.317432 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:21:04.321729 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:21:04.323858 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:21:04.324447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:04.331764 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:21:04.334710 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:21:04.337264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:21:04.339290 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:21:04.344991 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:21:04.365925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
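[Editor's note] The DHCPv4 lease logged above (172.31.17.137/20, gateway 172.31.16.1) can be sanity-checked with nothing but the Python standard library:

```python
import ipaddress

# Values copied from the systemd-networkd lease message above.
iface = ipaddress.ip_interface("172.31.17.137/20")
print(iface.network)   # 172.31.16.0/20 — a 4096-address VPC subnet
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: gateway is in-subnet
```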
Jan 17 00:21:04.367949 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:21:04.372595 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:21:04.496589 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:21:04.524585 kernel: loop1: detected capacity change from 0 to 61336 Jan 17 00:21:04.653608 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:21:04.763779 kernel: loop3: detected capacity change from 0 to 224512 Jan 17 00:21:05.049585 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:21:05.074585 kernel: loop5: detected capacity change from 0 to 61336 Jan 17 00:21:05.093585 kernel: loop6: detected capacity change from 0 to 142488 Jan 17 00:21:05.111584 kernel: loop7: detected capacity change from 0 to 224512 Jan 17 00:21:05.134491 (sd-merge)[1801]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 00:21:05.135216 (sd-merge)[1801]: Merged extensions into '/usr'. Jan 17 00:21:05.139343 systemd[1]: Reloading requested from client PID 1788 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:21:05.139361 systemd[1]: Reloading... Jan 17 00:21:05.208589 zram_generator::config[1829]: No configuration found. Jan 17 00:21:05.225985 systemd-networkd[1652]: eth0: Gained IPv6LL Jan 17 00:21:05.415501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:05.495440 systemd[1]: Reloading finished in 355 ms. Jan 17 00:21:05.513198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:21:05.514063 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:21:05.524773 systemd[1]: Starting ensure-sysext.service... Jan 17 00:21:05.528730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:21:05.532426 systemd[1]: Reloading requested from client PID 1888 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:21:05.532443 systemd[1]: Reloading... Jan 17 00:21:05.577731 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:21:05.578310 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:21:05.579777 systemd-tmpfiles[1889]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:21:05.581248 systemd-tmpfiles[1889]: ACLs are not supported, ignoring. Jan 17 00:21:05.581357 systemd-tmpfiles[1889]: ACLs are not supported, ignoring. Jan 17 00:21:05.589516 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:21:05.589535 systemd-tmpfiles[1889]: Skipping /boot Jan 17 00:21:05.598614 zram_generator::config[1913]: No configuration found. Jan 17 00:21:05.617447 systemd-tmpfiles[1889]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:21:05.617465 systemd-tmpfiles[1889]: Skipping /boot Jan 17 00:21:05.793029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:05.877084 systemd[1]: Reloading finished in 344 ms. 
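[Editor's note] The loop0–loop7 capacity changes and the (sd-merge) lines above are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) is attached to a loop device and its /usr tree is layered over the base /usr via overlayfs. A conceptual sketch of the lowerdir ordering only; the staging directory named here is hypothetical, and the real mount plumbing is internal to systemd-sysext:

```python
from pathlib import Path

def overlay_lowerdirs(staging: Path, base: str = "/usr") -> str:
    # Extensions are listed first: in overlayfs the earliest lowerdir is the
    # topmost layer, so extension files shadow the base hierarchy beneath them.
    layers = [str(p / "usr") for p in sorted(staging.iterdir()) if p.is_dir()]
    layers.append(base)
    return "lowerdir=" + ":".join(layers)

# e.g. mount -t overlay overlay -o <result> /usr   (read-only merge)
print(overlay_lowerdirs(Path("/run/extensions")))  # hypothetical staging dir
```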
Jan 17 00:21:05.879396 ldconfig[1784]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:21:05.891448 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:21:05.901230 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:21:05.907812 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:05.912710 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:21:05.914779 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:21:05.921456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:21:05.933275 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:21:05.941098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:05.941296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:05.948863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:21:05.951965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:05.962763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:05.965145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:05.965693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:05.966502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:05.967941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:21:05.977816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:21:05.977982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:05.984358 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:05.985685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:05.993889 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:21:06.000580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:06.000901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:21:06.012163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:21:06.022748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:21:06.024700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:21:06.029707 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:21:06.030471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:21:06.031165 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 00:21:06.033073 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:21:06.034096 systemd[1]: Finished ensure-sysext.service. Jan 17 00:21:06.042920 augenrules[2017]: No rules Jan 17 00:21:06.046756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:06.047541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:21:06.048346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:21:06.048576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:21:06.049464 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:21:06.049702 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:21:06.050270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:21:06.050408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:21:06.054930 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:21:06.055328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:21:06.062648 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:21:06.062727 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:21:06.064165 systemd-resolved[1983]: Positive Trust Anchors: Jan 17 00:21:06.064444 systemd-resolved[1983]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:21:06.064521 systemd-resolved[1983]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:21:06.070789 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:21:06.080673 systemd-resolved[1983]: Defaulting to hostname 'linux'. Jan 17 00:21:06.081821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:21:06.084846 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:21:06.085409 systemd[1]: Reached target network.target - Network. Jan 17 00:21:06.085813 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:21:06.086134 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:21:06.106316 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:21:06.106942 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:21:06.106984 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:21:06.107520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 17 00:21:06.107928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:21:06.108417 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:21:06.108847 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:21:06.109168 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:21:06.109532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:21:06.109694 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:21:06.110008 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:21:06.110935 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:21:06.112733 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:21:06.114769 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:21:06.117634 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:21:06.117971 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:21:06.118257 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:21:06.118715 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:21:06.118753 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:21:06.118774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:21:06.121094 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:21:06.131890 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:21:06.135735 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:21:06.140345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:21:06.143723 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:21:06.144800 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:21:06.151673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:06.153194 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:21:06.159585 jq[2047]: false Jan 17 00:21:06.165914 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 00:21:06.171546 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:21:06.181327 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 17 00:21:06.200660 extend-filesystems[2048]: Found loop4 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found loop5 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found loop6 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found loop7 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p1 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p2 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p3 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found usr Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p4 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p6 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p7 Jan 17 00:21:06.207678 extend-filesystems[2048]: Found nvme0n1p9 Jan 17 00:21:06.207678 extend-filesystems[2048]: Checking size of /dev/nvme0n1p9 Jan 17 00:21:06.208723 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 00:21:06.213362 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:21:06.224224 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:21:06.246396 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:21:06.247643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:21:06.255140 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:21:06.264920 extend-filesystems[2048]: Resized partition /dev/nvme0n1p9 Jan 17 00:21:06.267359 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:21:06.268630 dbus-daemon[2046]: [system] SELinux support is enabled Jan 17 00:21:06.271221 dbus-daemon[2046]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1652 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:06.271340 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:21:06.272848 coreos-metadata[2044]: Jan 17 00:21:06.270 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:06.279649 extend-filesystems[2084]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:21:06.288715 coreos-metadata[2044]: Jan 17 00:21:06.280 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 00:21:06.288715 coreos-metadata[2044]: Jan 17 00:21:06.282 INFO Fetch successful Jan 17 00:21:06.288715 coreos-metadata[2044]: Jan 17 00:21:06.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 00:21:06.282552 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:21:06.282803 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:21:06.288935 jq[2081]: true Jan 17 00:21:06.284358 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:21:06.284607 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 17 00:21:06.292858 coreos-metadata[2044]: Jan 17 00:21:06.290 INFO Fetch successful Jan 17 00:21:06.292858 coreos-metadata[2044]: Jan 17 00:21:06.290 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 00:21:06.300282 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 17 00:21:06.300359 coreos-metadata[2044]: Jan 17 00:21:06.293 INFO Fetch successful Jan 17 00:21:06.300359 coreos-metadata[2044]: Jan 17 00:21:06.298 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 00:21:06.301188 coreos-metadata[2044]: Jan 17 00:21:06.301 INFO Fetch successful Jan 17 00:21:06.301188 coreos-metadata[2044]: Jan 17 00:21:06.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 00:21:06.301881 coreos-metadata[2044]: Jan 17 00:21:06.301 INFO Fetch failed with 404: resource not found Jan 17 00:21:06.301881 coreos-metadata[2044]: Jan 17 00:21:06.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 00:21:06.302397 coreos-metadata[2044]: Jan 17 00:21:06.302 INFO Fetch successful Jan 17 00:21:06.302397 coreos-metadata[2044]: Jan 17 00:21:06.302 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 00:21:06.302919 coreos-metadata[2044]: Jan 17 00:21:06.302 INFO Fetch successful Jan 17 00:21:06.302919 coreos-metadata[2044]: Jan 17 00:21:06.302 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 00:21:06.303405 coreos-metadata[2044]: Jan 17 00:21:06.303 INFO Fetch successful Jan 17 00:21:06.303405 coreos-metadata[2044]: Jan 17 00:21:06.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 00:21:06.304883 coreos-metadata[2044]: Jan 17 00:21:06.304 INFO Fetch successful Jan 17 00:21:06.304883 coreos-metadata[2044]: Jan 17 00:21:06.304 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 00:21:06.305620 coreos-metadata[2044]: Jan 17 00:21:06.305 INFO Fetch successful Jan 17 00:21:06.313457 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:21:06.313716 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:21:06.318770 update_engine[2079]: I20260117 00:21:06.318675 2079 main.cc:92] Flatcar Update Engine starting Jan 17 00:21:06.324629 update_engine[2079]: I20260117 00:21:06.320212 2079 update_check_scheduler.cc:74] Next update check in 5m18s Jan 17 00:21:06.337687 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:12 UTC 2026 (1): Starting Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: ---------------------------------------------------- Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: corporation. 
Support and training for ntp-4 are Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: available at https://www.nwtime.org/support Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: ---------------------------------------------------- Jan 17 00:21:06.344764 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: proto: precision = 0.056 usec (-24) Jan 17 00:21:06.337709 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 00:21:06.337717 ntpd[2052]: ---------------------------------------------------- Jan 17 00:21:06.352854 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: basedate set to 2026-01-04 Jan 17 00:21:06.352854 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:06.352921 jq[2095]: true Jan 17 00:21:06.351547 (ntainerd)[2097]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:21:06.337725 ntpd[2052]: ntp-4 is maintained by Network Time Foundation, Jan 17 00:21:06.337732 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 00:21:06.337739 ntpd[2052]: corporation. Support and training for ntp-4 are Jan 17 00:21:06.337746 ntpd[2052]: available at https://www.nwtime.org/support Jan 17 00:21:06.337753 ntpd[2052]: ---------------------------------------------------- Jan 17 00:21:06.343621 ntpd[2052]: proto: precision = 0.056 usec (-24) Jan 17 00:21:06.351758 ntpd[2052]: basedate set to 2026-01-04 Jan 17 00:21:06.351777 ntpd[2052]: gps base set to 2026-01-04 (week 2400) Jan 17 00:21:06.362975 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen normally on 3 eth0 172.31.17.137:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listen normally on 5 eth0 [fe80::4b0:ecff:fedf:d81f%2]:123 Jan 17 00:21:06.363587 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Jan 17 00:21:06.363024 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 00:21:06.363173 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 00:21:06.363199 ntpd[2052]: Listen normally on 3 eth0 172.31.17.137:123 Jan 17 00:21:06.363231 ntpd[2052]: Listen normally on 4 lo [::1]:123 Jan 17 00:21:06.363260 ntpd[2052]: Listen normally on 5 eth0 [fe80::4b0:ecff:fedf:d81f%2]:123 Jan 17 00:21:06.363287 ntpd[2052]: Listening on routing socket on fd #22 for interface updates Jan 17 00:21:06.371370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:21:06.371411 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:21:06.371863 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 17 00:21:06.371880 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:21:06.376419 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:21:06.377487 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:21:06.378836 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:06.378836 ntpd[2052]: 17 Jan 00:21:06 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:06.377970 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:06.377998 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 00:21:06.380930 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:21:06.393791 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:21:06.395099 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:21:06.421299 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1658) Jan 17 00:21:06.427636 tar[2091]: linux-amd64/LICENSE Jan 17 00:21:06.433450 tar[2091]: linux-amd64/helm Jan 17 00:21:06.453178 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 00:21:06.455056 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:21:06.459908 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 00:21:06.481547 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:21:06.482459 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:21:06.544822 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 17 00:21:06.546983 systemd-logind[2077]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:21:06.547003 systemd-logind[2077]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 00:21:06.547023 systemd-logind[2077]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:21:06.552782 systemd-logind[2077]: New seat seat0. Jan 17 00:21:06.559586 bash[2179]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:06.561321 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:21:06.563299 extend-filesystems[2084]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 00:21:06.563299 extend-filesystems[2084]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 00:21:06.563299 extend-filesystems[2084]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 17 00:21:06.577107 extend-filesystems[2048]: Resized filesystem in /dev/nvme0n1p9 Jan 17 00:21:06.570554 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:21:06.570806 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:21:06.583935 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:21:06.601908 systemd[1]: Starting sshkeys.service... Jan 17 00:21:06.618010 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:21:06.626690 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 17 00:21:06.674172 amazon-ssm-agent[2166]: Initializing new seelog logger Jan 17 00:21:06.686445 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete Jan 17 00:21:06.686445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.686445 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.686445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 processing appconfig overrides Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO Proxy environment variables: Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 processing appconfig overrides Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.699445 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 processing appconfig overrides Jan 17 00:21:06.716816 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.717930 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:21:06.717930 amazon-ssm-agent[2166]: 2026/01/17 00:21:06 processing appconfig overrides Jan 17 00:21:06.778901 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 00:21:06.779225 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 00:21:06.787030 dbus-daemon[2046]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2156 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 00:21:06.799657 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO https_proxy: Jan 17 00:21:06.801919 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 17 00:21:06.849583 coreos-metadata[2228]: Jan 17 00:21:06.847 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:21:06.848521 locksmithd[2129]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:21:06.851600 coreos-metadata[2228]: Jan 17 00:21:06.851 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:21:06.854313 coreos-metadata[2228]: Jan 17 00:21:06.854 INFO Fetch successful Jan 17 00:21:06.854313 coreos-metadata[2228]: Jan 17 00:21:06.854 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:21:06.856583 coreos-metadata[2228]: Jan 17 00:21:06.855 INFO Fetch successful Jan 17 00:21:06.865723 unknown[2228]: wrote ssh authorized keys file for user: core Jan 17 00:21:06.904666 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO http_proxy: Jan 17 00:21:06.929474 polkitd[2252]: Started polkitd version 121 Jan 17 00:21:07.013005 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO no_proxy: Jan 17 00:21:07.112810 polkitd[2252]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 00:21:07.113214 polkitd[2252]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 00:21:07.115871 polkitd[2252]: Finished loading, compiling and executing 2 rules Jan 17 00:21:07.117185 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 00:21:07.117426 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 00:21:07.119104 polkitd[2252]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 00:21:07.139921 sshd_keygen[2101]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:21:07.170642 update-ssh-keys[2262]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:21:07.175307 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:21:07.184369 systemd[1]: Finished sshkeys.service. Jan 17 00:21:07.201458 systemd-hostnamed[2156]: Hostname set to (transient) Jan 17 00:21:07.201580 systemd-resolved[1983]: System hostname changed to 'ip-172-31-17-137'. Jan 17 00:21:07.206208 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:21:07.206762 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:21:07.218887 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:21:07.240992 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:21:07.241238 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:21:07.251896 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:21:07.271582 containerd[2097]: time="2026-01-17T00:21:07.270879966Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:21:07.304216 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:21:07.307685 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:21:07.309974 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:21:07.321386 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:21:07.325359 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:21:07.337655 containerd[2097]: time="2026-01-17T00:21:07.337603408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:21:07.341274 containerd[2097]: time="2026-01-17T00:21:07.341229610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:07.341274 containerd[2097]: time="2026-01-17T00:21:07.341270175Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:21:07.341393 containerd[2097]: time="2026-01-17T00:21:07.341287718Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:21:07.341460 containerd[2097]: time="2026-01-17T00:21:07.341445071Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:21:07.341503 containerd[2097]: time="2026-01-17T00:21:07.341466486Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.341533 containerd[2097]: time="2026-01-17T00:21:07.341518825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:07.341555 containerd[2097]: time="2026-01-17T00:21:07.341534352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343081 containerd[2097]: time="2026-01-17T00:21:07.343047343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343114 containerd[2097]: time="2026-01-17T00:21:07.343082251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343114 containerd[2097]: time="2026-01-17T00:21:07.343098627Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343114 containerd[2097]: time="2026-01-17T00:21:07.343108515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343597 containerd[2097]: time="2026-01-17T00:21:07.343197943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343597 containerd[2097]: time="2026-01-17T00:21:07.343399797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343597 containerd[2097]: time="2026-01-17T00:21:07.343540662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:21:07.343597 containerd[2097]: time="2026-01-17T00:21:07.343553481Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 00:21:07.343692 containerd[2097]: time="2026-01-17T00:21:07.343654379Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:21:07.343715 containerd[2097]: time="2026-01-17T00:21:07.343693307Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:21:07.371732 containerd[2097]: time="2026-01-17T00:21:07.371646026Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:21:07.371732 containerd[2097]: time="2026-01-17T00:21:07.371708182Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:21:07.371732 containerd[2097]: time="2026-01-17T00:21:07.371727508Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:21:07.371864 containerd[2097]: time="2026-01-17T00:21:07.371742729Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:21:07.371864 containerd[2097]: time="2026-01-17T00:21:07.371757989Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:21:07.371930 containerd[2097]: time="2026-01-17T00:21:07.371911806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:21:07.372246 containerd[2097]: time="2026-01-17T00:21:07.372222707Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:21:07.372348 containerd[2097]: time="2026-01-17T00:21:07.372331559Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:21:07.372374 containerd[2097]: time="2026-01-17T00:21:07.372350293Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:21:07.372374 containerd[2097]: time="2026-01-17T00:21:07.372364681Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:21:07.372426 containerd[2097]: time="2026-01-17T00:21:07.372377493Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372426 containerd[2097]: time="2026-01-17T00:21:07.372390507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372426 containerd[2097]: time="2026-01-17T00:21:07.372403306Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372426 containerd[2097]: time="2026-01-17T00:21:07.372422806Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372510 containerd[2097]: time="2026-01-17T00:21:07.372437374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372510 containerd[2097]: time="2026-01-17T00:21:07.372459122Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372510 containerd[2097]: time="2026-01-17T00:21:07.372472477Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 00:21:07.372510 containerd[2097]: time="2026-01-17T00:21:07.372484126Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:21:07.372510 containerd[2097]: time="2026-01-17T00:21:07.372503132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372515742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372527723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372540863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372552157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372588170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372599621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372611497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372636 containerd[2097]: time="2026-01-17T00:21:07.372630942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372644609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372656636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372667708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372679708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372694428Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372713541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372724109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372733457Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372770254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372786285Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372796835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372807630Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:21:07.372829 containerd[2097]: time="2026-01-17T00:21:07.372819102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.373111 containerd[2097]: time="2026-01-17T00:21:07.372831220Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:21:07.373111 containerd[2097]: time="2026-01-17T00:21:07.372840924Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:21:07.373111 containerd[2097]: time="2026-01-17T00:21:07.372850083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:21:07.373180 containerd[2097]: time="2026-01-17T00:21:07.373119914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:21:07.373180 containerd[2097]: time="2026-01-17T00:21:07.373171300Z" level=info msg="Connect containerd service" Jan 17 00:21:07.373452 containerd[2097]: time="2026-01-17T00:21:07.373210245Z" level=info msg="using legacy CRI server" Jan 17 00:21:07.373452 containerd[2097]: time="2026-01-17T00:21:07.373217870Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:21:07.373452 containerd[2097]: time="2026-01-17T00:21:07.373341945Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.374663368Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375685818Z" level=info msg="Start subscribing containerd event" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375751742Z" level=info msg="Start recovering state" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375822623Z" level=info msg="Start event monitor" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375838622Z" level=info msg="Start snapshots syncer" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375847037Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:21:07.376925 containerd[2097]: time="2026-01-17T00:21:07.375854853Z" level=info msg="Start streaming server" Jan 17 00:21:07.380230 containerd[2097]: time="2026-01-17T00:21:07.380197822Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:21:07.380606 containerd[2097]: time="2026-01-17T00:21:07.380264920Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:21:07.380606 containerd[2097]: time="2026-01-17T00:21:07.380319809Z" level=info msg="containerd successfully booted in 0.112385s" Jan 17 00:21:07.380453 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:21:07.407228 amazon-ssm-agent[2166]: 2026-01-17 00:21:06 INFO Agent will take identity from EC2 Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [Registrar] Starting registrar module Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:21:07.455329 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:21:07.507665 amazon-ssm-agent[2166]: 2026-01-17 00:21:07 INFO [CredentialRefresher] Next credential rotation will be in 30.34165998895 minutes Jan 17 00:21:07.649885 tar[2091]: linux-amd64/README.md Jan 17 00:21:07.665614 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:21:08.114988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:21:08.121687 systemd[1]: Started sshd@0-172.31.17.137:22-4.153.228.146:50688.service - OpenSSH per-connection server daemon (4.153.228.146:50688). Jan 17 00:21:08.483801 amazon-ssm-agent[2166]: 2026-01-17 00:21:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:21:08.586251 amazon-ssm-agent[2166]: 2026-01-17 00:21:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2320) started Jan 17 00:21:08.669212 sshd[2317]: Accepted publickey for core from 4.153.228.146 port 50688 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:08.672581 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:08.682731 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:21:08.684767 amazon-ssm-agent[2166]: 2026-01-17 00:21:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:21:08.687847 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:21:08.691697 systemd-logind[2077]: New session 1 of user core. Jan 17 00:21:08.703945 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:21:08.713089 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:21:08.717083 (systemd)[2334]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:21:08.834277 systemd[2334]: Queued start job for default target default.target. Jan 17 00:21:08.835121 systemd[2334]: Created slice app.slice - User Application Slice. Jan 17 00:21:08.835148 systemd[2334]: Reached target paths.target - Paths. Jan 17 00:21:08.835161 systemd[2334]: Reached target timers.target - Timers. Jan 17 00:21:08.840144 systemd[2334]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:21:08.850055 systemd[2334]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:21:08.850130 systemd[2334]: Reached target sockets.target - Sockets. Jan 17 00:21:08.850145 systemd[2334]: Reached target basic.target - Basic System. Jan 17 00:21:08.850189 systemd[2334]: Reached target default.target - Main User Target. 
Jan 17 00:21:08.850217 systemd[2334]: Startup finished in 125ms. Jan 17 00:21:08.850460 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:21:08.857928 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:21:09.246009 systemd[1]: Started sshd@1-172.31.17.137:22-4.153.228.146:50698.service - OpenSSH per-connection server daemon (4.153.228.146:50698). Jan 17 00:21:09.767456 sshd[2346]: Accepted publickey for core from 4.153.228.146 port 50698 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:09.768994 sshd[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:09.773712 systemd-logind[2077]: New session 2 of user core. Jan 17 00:21:09.779939 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:21:10.147320 sshd[2346]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:10.150776 systemd[1]: sshd@1-172.31.17.137:22-4.153.228.146:50698.service: Deactivated successfully. Jan 17 00:21:10.153668 systemd-logind[2077]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:21:10.154161 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:21:10.155167 systemd-logind[2077]: Removed session 2. Jan 17 00:21:10.237426 systemd[1]: Started sshd@2-172.31.17.137:22-4.153.228.146:50712.service - OpenSSH per-connection server daemon (4.153.228.146:50712). Jan 17 00:21:10.626724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:10.626942 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:10.627767 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:21:10.628346 systemd[1]: Startup finished in 8.798s (kernel) + 8.758s (userspace) = 17.556s. Jan 17 00:21:10.754775 sshd[2354]: Accepted publickey for core from 4.153.228.146 port 50712 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:10.756722 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:10.762291 systemd-logind[2077]: New session 3 of user core. Jan 17 00:21:10.772987 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:21:11.128873 sshd[2354]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:11.132129 systemd[1]: sshd@2-172.31.17.137:22-4.153.228.146:50712.service: Deactivated successfully. Jan 17 00:21:11.134657 systemd-logind[2077]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:21:11.135605 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:21:11.136864 systemd-logind[2077]: Removed session 3. Jan 17 00:21:11.784281 kubelet[2364]: E0117 00:21:11.784185 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:11.786894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:11.787111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:21.209046 systemd[1]: Started sshd@3-172.31.17.137:22-4.153.228.146:53982.service - OpenSSH per-connection server daemon (4.153.228.146:53982). 
Jan 17 00:21:21.684718 sshd[2382]: Accepted publickey for core from 4.153.228.146 port 53982 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:21.686213 sshd[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:21.691370 systemd-logind[2077]: New session 4 of user core. Jan 17 00:21:21.696895 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:21:21.958376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:21:21.963785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:22.036127 sshd[2382]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:22.039609 systemd-logind[2077]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:21:22.040279 systemd[1]: sshd@3-172.31.17.137:22-4.153.228.146:53982.service: Deactivated successfully. Jan 17 00:21:22.044192 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:21:22.045682 systemd-logind[2077]: Removed session 4. Jan 17 00:21:22.120003 systemd[1]: Started sshd@4-172.31.17.137:22-4.153.228.146:53990.service - OpenSSH per-connection server daemon (4.153.228.146:53990). Jan 17 00:21:22.602816 sshd[2394]: Accepted publickey for core from 4.153.228.146 port 53990 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:22.604296 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:22.608880 systemd-logind[2077]: New session 5 of user core. Jan 17 00:21:22.617895 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:21:22.922819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:22.928713 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:22.952826 sshd[2394]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:22.957848 systemd-logind[2077]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:21:22.958962 systemd[1]: sshd@4-172.31.17.137:22-4.153.228.146:53990.service: Deactivated successfully. Jan 17 00:21:22.965527 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:21:22.966708 systemd-logind[2077]: Removed session 5. Jan 17 00:21:22.982622 kubelet[2407]: E0117 00:21:22.982536 2407 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:22.986888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:22.987211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:21:23.035077 systemd[1]: Started sshd@5-172.31.17.137:22-4.153.228.146:53996.service - OpenSSH per-connection server daemon (4.153.228.146:53996). Jan 17 00:21:23.521642 sshd[2418]: Accepted publickey for core from 4.153.228.146 port 53996 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:23.522975 sshd[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:23.528111 systemd-logind[2077]: New session 6 of user core. Jan 17 00:21:23.533928 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 00:21:23.877299 sshd[2418]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:23.881656 systemd[1]: sshd@5-172.31.17.137:22-4.153.228.146:53996.service: Deactivated successfully. Jan 17 00:21:23.886028 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:21:23.886730 systemd-logind[2077]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:21:23.887872 systemd-logind[2077]: Removed session 6. Jan 17 00:21:23.970909 systemd[1]: Started sshd@6-172.31.17.137:22-4.153.228.146:54000.service - OpenSSH per-connection server daemon (4.153.228.146:54000). Jan 17 00:21:24.455086 sshd[2426]: Accepted publickey for core from 4.153.228.146 port 54000 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:24.456724 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:24.462211 systemd-logind[2077]: New session 7 of user core. Jan 17 00:21:24.467978 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:21:24.769497 sudo[2430]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:21:24.769815 sudo[2430]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:24.785318 sudo[2430]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:24.863671 sshd[2426]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:24.867471 systemd[1]: sshd@6-172.31.17.137:22-4.153.228.146:54000.service: Deactivated successfully. Jan 17 00:21:24.872305 systemd-logind[2077]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:21:24.873314 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:21:24.874526 systemd-logind[2077]: Removed session 7. Jan 17 00:21:24.958879 systemd[1]: Started sshd@7-172.31.17.137:22-4.153.228.146:56352.service - OpenSSH per-connection server daemon (4.153.228.146:56352). Jan 17 00:21:25.482246 sshd[2435]: Accepted publickey for core from 4.153.228.146 port 56352 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:25.483762 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:25.488430 systemd-logind[2077]: New session 8 of user core. Jan 17 00:21:25.490828 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:21:25.777910 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:21:25.778185 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:25.781689 sudo[2440]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:25.786813 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:21:25.787085 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:25.801818 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:25.803256 auditctl[2443]: No rules Jan 17 00:21:25.803668 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:21:25.803893 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:21:25.807894 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:21:25.837712 augenrules[2462]: No rules Jan 17 00:21:25.838978 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 00:21:25.840313 sudo[2439]: pam_unix(sudo:session): session closed for user root Jan 17 00:21:25.925268 sshd[2435]: pam_unix(sshd:session): session closed for user core Jan 17 00:21:25.928199 systemd[1]: sshd@7-172.31.17.137:22-4.153.228.146:56352.service: Deactivated successfully. Jan 17 00:21:25.931406 systemd-logind[2077]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:21:25.932329 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:21:25.933341 systemd-logind[2077]: Removed session 8. Jan 17 00:21:26.001853 systemd[1]: Started sshd@8-172.31.17.137:22-4.153.228.146:56354.service - OpenSSH per-connection server daemon (4.153.228.146:56354). Jan 17 00:21:26.488224 sshd[2471]: Accepted publickey for core from 4.153.228.146 port 56354 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:21:26.489648 sshd[2471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:21:26.494965 systemd-logind[2077]: New session 9 of user core. Jan 17 00:21:26.500873 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:21:26.765693 sudo[2475]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:21:26.765991 sudo[2475]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:21:27.339900 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:21:27.340110 (dockerd)[2490]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:21:27.971427 dockerd[2490]: time="2026-01-17T00:21:27.971369357Z" level=info msg="Starting up" Jan 17 00:21:28.372004 dockerd[2490]: time="2026-01-17T00:21:28.371955433Z" level=info msg="Loading containers: start." Jan 17 00:21:28.496587 kernel: Initializing XFRM netlink socket Jan 17 00:21:28.536079 (udev-worker)[2557]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:21:28.598650 systemd-networkd[1652]: docker0: Link UP Jan 17 00:21:28.620360 dockerd[2490]: time="2026-01-17T00:21:28.620306439Z" level=info msg="Loading containers: done." Jan 17 00:21:28.648861 dockerd[2490]: time="2026-01-17T00:21:28.648745760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:21:28.648861 dockerd[2490]: time="2026-01-17T00:21:28.648858847Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:21:28.649036 dockerd[2490]: time="2026-01-17T00:21:28.648959278Z" level=info msg="Daemon has completed initialization" Jan 17 00:21:28.684258 dockerd[2490]: time="2026-01-17T00:21:28.684126079Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:21:28.684682 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:21:30.294077 containerd[2097]: time="2026-01-17T00:21:30.294027598Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:21:30.821745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069402431.mount: Deactivated successfully. 
Jan 17 00:21:32.738180 containerd[2097]: time="2026-01-17T00:21:32.738124489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:32.739637 containerd[2097]: time="2026-01-17T00:21:32.739587541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 17 00:21:32.741022 containerd[2097]: time="2026-01-17T00:21:32.740629887Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:32.743648 containerd[2097]: time="2026-01-17T00:21:32.743609263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:32.744877 containerd[2097]: time="2026-01-17T00:21:32.744834047Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.450762252s" Jan 17 00:21:32.744961 containerd[2097]: time="2026-01-17T00:21:32.744883462Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:21:32.745591 containerd[2097]: time="2026-01-17T00:21:32.745499523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:21:33.024156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:21:33.039908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:34.298722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:34.311178 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:34.355860 kubelet[2697]: E0117 00:21:34.355803 2697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:34.358485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:34.358771 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:21:36.227278 containerd[2097]: time="2026-01-17T00:21:36.227212848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:36.228475 containerd[2097]: time="2026-01-17T00:21:36.228284223Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 17 00:21:36.229667 containerd[2097]: time="2026-01-17T00:21:36.229641976Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:36.232281 containerd[2097]: time="2026-01-17T00:21:36.232233914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:36.233851 containerd[2097]: time="2026-01-17T00:21:36.233254661Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.487715722s" Jan 17 00:21:36.233851 containerd[2097]: time="2026-01-17T00:21:36.233302388Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 00:21:36.234147 containerd[2097]: time="2026-01-17T00:21:36.234129142Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:21:37.235286 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 00:21:38.091003 containerd[2097]: time="2026-01-17T00:21:38.090942336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:38.092169 containerd[2097]: time="2026-01-17T00:21:38.092120100Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 17 00:21:38.093342 containerd[2097]: time="2026-01-17T00:21:38.093294376Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:38.096301 containerd[2097]: time="2026-01-17T00:21:38.096247684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:38.097346 containerd[2097]: time="2026-01-17T00:21:38.097232787Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.86301369s" Jan 17 00:21:38.097346 containerd[2097]: time="2026-01-17T00:21:38.097264453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:21:38.098377 containerd[2097]: time="2026-01-17T00:21:38.098358087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:21:39.076041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66801127.mount: Deactivated successfully. 
Jan 17 00:21:39.653967 containerd[2097]: time="2026-01-17T00:21:39.653918958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.655118 containerd[2097]: time="2026-01-17T00:21:39.654967620Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 17 00:21:39.657133 containerd[2097]: time="2026-01-17T00:21:39.656129707Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.658600 containerd[2097]: time="2026-01-17T00:21:39.658386033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:39.658964 containerd[2097]: time="2026-01-17T00:21:39.658940807Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.560461243s" Jan 17 00:21:39.659041 containerd[2097]: time="2026-01-17T00:21:39.659028072Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:21:39.659849 containerd[2097]: time="2026-01-17T00:21:39.659806591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:21:40.126650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815094952.mount: Deactivated successfully. 
Jan 17 00:21:41.154351 containerd[2097]: time="2026-01-17T00:21:41.152855086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.154351 containerd[2097]: time="2026-01-17T00:21:41.154105381Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 17 00:21:41.155161 containerd[2097]: time="2026-01-17T00:21:41.155125850Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.158202 containerd[2097]: time="2026-01-17T00:21:41.158166664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.159635 containerd[2097]: time="2026-01-17T00:21:41.159600687Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.499758257s" Jan 17 00:21:41.159738 containerd[2097]: time="2026-01-17T00:21:41.159643187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:21:41.160238 containerd[2097]: time="2026-01-17T00:21:41.160205232Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:21:41.610449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366739549.mount: Deactivated successfully. 
Jan 17 00:21:41.616682 containerd[2097]: time="2026-01-17T00:21:41.616637039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.617480 containerd[2097]: time="2026-01-17T00:21:41.617432386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:21:41.619473 containerd[2097]: time="2026-01-17T00:21:41.618318211Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.621091 containerd[2097]: time="2026-01-17T00:21:41.620403307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:41.621091 containerd[2097]: time="2026-01-17T00:21:41.620976664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 460.629952ms" Jan 17 00:21:41.621091 containerd[2097]: time="2026-01-17T00:21:41.621002349Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:21:41.621940 containerd[2097]: time="2026-01-17T00:21:41.621835587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:21:42.117195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583034250.mount: Deactivated successfully. Jan 17 00:21:44.524186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:21:44.533824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:44.850109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:44.865119 (kubelet)[2845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:21:44.922737 kubelet[2845]: E0117 00:21:44.922649 2845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:21:44.926111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:21:44.926293 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
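The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during `kubeadm init`/`kubeadm join`, so systemd keeps restarting the unit (restart counter 3 above) until bootstrap runs. A minimal sketch, assuming the kubeadm layout, of loading that file into the v1beta1 KubeletConfiguration type:

```go
package main

import (
	"fmt"
	"log"
	"os"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/config.yaml")
	if err != nil {
		// Before kubeadm runs, this is the same "no such file or
		// directory" failure the kubelet logs above.
		log.Fatal(err)
	}
	var cfg kubeletv1beta1.KubeletConfiguration
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	// These fields surface later in this log's NodeConfig dump as
	// CgroupDriver and HardEvictionThresholds.
	fmt.Println("cgroupDriver:", cfg.CgroupDriver)
	fmt.Println("evictionHard:", cfg.EvictionHard)
}
```

The deprecation warnings that appear once the kubelet does start ("--container-runtime-endpoint ... should be set via the config file") refer to moving those flags into this same file.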
Jan 17 00:21:45.341873 containerd[2097]: time="2026-01-17T00:21:45.341819344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:45.344434 containerd[2097]: time="2026-01-17T00:21:45.344115573Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 17 00:21:45.347441 containerd[2097]: time="2026-01-17T00:21:45.347394132Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:45.351973 containerd[2097]: time="2026-01-17T00:21:45.351928819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:21:45.353080 containerd[2097]: time="2026-01-17T00:21:45.353045643Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.731183077s" Jan 17 00:21:45.353080 containerd[2097]: time="2026-01-17T00:21:45.353078269Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:21:47.777489 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:47.790950 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:47.829477 systemd[1]: Reloading requested from client PID 2884 ('systemctl') (unit session-9.scope)... Jan 17 00:21:47.829495 systemd[1]: Reloading... Jan 17 00:21:47.972604 zram_generator::config[2927]: No configuration found. Jan 17 00:21:48.123201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:48.206881 systemd[1]: Reloading finished in 376 ms. Jan 17 00:21:48.261159 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:21:48.261280 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:21:48.261675 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:48.267451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:48.489748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:48.496959 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:21:48.569427 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:48.569427 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:21:48.569427 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:48.569991 kubelet[2999]: I0117 00:21:48.569522 2999 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:21:48.955135 kubelet[2999]: I0117 00:21:48.955093 2999 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:21:48.955135 kubelet[2999]: I0117 00:21:48.955125 2999 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:21:48.955640 kubelet[2999]: I0117 00:21:48.955534 2999 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:21:48.995971 kubelet[2999]: E0117 00:21:48.995922 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:48.998253 kubelet[2999]: I0117 00:21:48.998071 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:49.013548 kubelet[2999]: E0117 00:21:49.013500 2999 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:21:49.013548 kubelet[2999]: I0117 00:21:49.013532 2999 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:21:49.019935 kubelet[2999]: I0117 00:21:49.019461 2999 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:21:49.024108 kubelet[2999]: I0117 00:21:49.023668 2999 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:21:49.024108 kubelet[2999]: I0117 00:21:49.023885 2999 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-137","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:21:49.025984 kubelet[2999]: I0117 00:21:49.025932 2999 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:21:49.025984 kubelet[2999]: I0117 00:21:49.025965 2999 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:21:49.027597 kubelet[2999]: I0117 00:21:49.027554 2999 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:49.033410 kubelet[2999]: I0117 00:21:49.033290 2999 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:21:49.033410 kubelet[2999]: I0117 00:21:49.033347 2999 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:21:49.033410 kubelet[2999]: I0117 00:21:49.033370 2999 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:21:49.033410 kubelet[2999]: I0117 00:21:49.033380 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:21:49.037711 kubelet[2999]: W0117 00:21:49.037593 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:49.037711 kubelet[2999]: E0117 00:21:49.037666 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:49.037936 kubelet[2999]: I0117 
00:21:49.037911 2999 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:21:49.042199 kubelet[2999]: I0117 00:21:49.041709 2999 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:21:49.042199 kubelet[2999]: W0117 00:21:49.041774 2999 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:21:49.049677 kubelet[2999]: W0117 00:21:49.048437 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:49.049677 kubelet[2999]: E0117 00:21:49.048485 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:49.049677 kubelet[2999]: I0117 00:21:49.048756 2999 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:21:49.049677 kubelet[2999]: I0117 00:21:49.048782 2999 server.go:1287] "Started kubelet" Jan 17 00:21:49.049677 kubelet[2999]: I0117 00:21:49.049403 2999 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:21:49.051859 kubelet[2999]: I0117 00:21:49.051842 2999 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:21:49.056195 kubelet[2999]: I0117 00:21:49.056128 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:21:49.056405 kubelet[2999]: I0117 00:21:49.056387 2999 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:21:49.056847 kubelet[2999]: I0117 00:21:49.056820 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:21:49.061631 kubelet[2999]: E0117 00:21:49.057712 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.137:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.137:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-137.188b5ccbcc4cf753 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-137,UID:ip-172-31-17-137,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-137,},FirstTimestamp:2026-01-17 00:21:49.048764243 +0000 UTC m=+0.524916110,LastTimestamp:2026-01-17 00:21:49.048764243 +0000 UTC m=+0.524916110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-137,}" Jan 17 00:21:49.061631 kubelet[2999]: I0117 00:21:49.061421 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:21:49.066390 kubelet[2999]: E0117 00:21:49.066367 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:49.067243 kubelet[2999]: I0117 
00:21:49.067225 2999 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:21:49.067523 kubelet[2999]: I0117 00:21:49.067513 2999 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:21:49.067649 kubelet[2999]: I0117 00:21:49.067641 2999 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:21:49.068017 kubelet[2999]: W0117 00:21:49.067986 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:49.068117 kubelet[2999]: E0117 00:21:49.068102 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:49.068775 kubelet[2999]: E0117 00:21:49.068217 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="200ms" Jan 17 00:21:49.068775 kubelet[2999]: I0117 00:21:49.068628 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:21:49.071779 kubelet[2999]: E0117 00:21:49.071759 2999 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:21:49.074010 kubelet[2999]: I0117 00:21:49.073991 2999 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:21:49.074010 kubelet[2999]: I0117 00:21:49.074007 2999 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:21:49.085917 kubelet[2999]: I0117 00:21:49.085869 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:21:49.092496 kubelet[2999]: I0117 00:21:49.092471 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:21:49.092713 kubelet[2999]: I0117 00:21:49.092656 2999 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:21:49.092713 kubelet[2999]: I0117 00:21:49.092680 2999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
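Every reflector list/watch, the node-lease controller, and the CSR bootstrap above fail with the same "dial tcp 172.31.17.137:6443: connect: connection refused": the kube-apiserver is itself a static pod this kubelet has not started yet, so all client traffic is refused until it comes up. A hedged client-go sketch of one such probe; the endpoint and lease name come from the log, everything else is illustrative, and Insecure is set only because this checks reachability, not authentication:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host:            "https://172.31.17.137:6443",
		TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // probe only
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same object the "Failed to ensure lease exists, will retry"
	// controller above keeps requesting with a growing backoff interval.
	_, err = cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-172-31-17-137", metav1.GetOptions{})
	fmt.Println("lease get:", err) // connection refused until the apiserver pod starts
}
```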
Jan 17 00:21:49.092713 kubelet[2999]: I0117 00:21:49.092688 2999 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:21:49.100482 kubelet[2999]: E0117 00:21:49.099649 2999 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:21:49.101147 kubelet[2999]: W0117 00:21:49.101091 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:49.101223 kubelet[2999]: E0117 00:21:49.101146 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:49.105182 kubelet[2999]: I0117 00:21:49.105158 2999 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:21:49.105182 kubelet[2999]: I0117 00:21:49.105174 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:21:49.105182 kubelet[2999]: I0117 00:21:49.105190 2999 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:49.107632 kubelet[2999]: I0117 00:21:49.107607 2999 policy_none.go:49] "None policy: Start" Jan 17 00:21:49.107632 kubelet[2999]: I0117 00:21:49.107637 2999 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:21:49.107731 kubelet[2999]: I0117 00:21:49.107648 2999 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:21:49.112158 kubelet[2999]: I0117 00:21:49.112134 2999 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:21:49.114052 kubelet[2999]: I0117 00:21:49.112432 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:21:49.114052 kubelet[2999]: I0117 00:21:49.112446 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:21:49.114052 kubelet[2999]: I0117 00:21:49.113806 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:21:49.114536 kubelet[2999]: E0117 00:21:49.114517 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:21:49.114615 kubelet[2999]: E0117 00:21:49.114553 2999 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-137\" not found" Jan 17 00:21:49.207186 kubelet[2999]: E0117 00:21:49.207093 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:49.210024 kubelet[2999]: E0117 00:21:49.209994 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:49.213001 kubelet[2999]: E0117 00:21:49.212979 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:49.215207 kubelet[2999]: I0117 00:21:49.215183 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:49.215672 kubelet[2999]: E0117 00:21:49.215625 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.137:6443/api/v1/nodes\": dial tcp 172.31.17.137:6443: connect: connection refused" node="ip-172-31-17-137" Jan 17 00:21:49.269264 kubelet[2999]: I0117 00:21:49.269018 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:49.269264 kubelet[2999]: I0117 00:21:49.269059 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:49.269264 kubelet[2999]: I0117 00:21:49.269082 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99e4ab7890ecf57c1ad633f8da4d48c4-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-137\" (UID: \"99e4ab7890ecf57c1ad633f8da4d48c4\") " pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:21:49.269264 kubelet[2999]: I0117 00:21:49.269098 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:49.269264 kubelet[2999]: I0117 00:21:49.269117 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:49.269502 kubelet[2999]: I0117 00:21:49.269132 2999 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:49.269502 kubelet[2999]: I0117 00:21:49.269148 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:49.269502 kubelet[2999]: I0117 00:21:49.269164 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-ca-certs\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:49.269502 kubelet[2999]: I0117 00:21:49.269179 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:49.269502 kubelet[2999]: E0117 00:21:49.269229 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="400ms" Jan 17 00:21:49.417753 kubelet[2999]: I0117 00:21:49.417714 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:49.418111 kubelet[2999]: E0117 00:21:49.418081 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.137:6443/api/v1/nodes\": dial tcp 172.31.17.137:6443: connect: connection refused" node="ip-172-31-17-137" Jan 17 00:21:49.508973 containerd[2097]: time="2026-01-17T00:21:49.508863809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-137,Uid:46a94013ce399bb994b809b96c4ccaff,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:49.511742 containerd[2097]: time="2026-01-17T00:21:49.511483338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-137,Uid:c3e529a9a15c0a3b38c56208ec0bd099,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:49.514379 containerd[2097]: time="2026-01-17T00:21:49.514351252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-137,Uid:99e4ab7890ecf57c1ad633f8da4d48c4,Namespace:kube-system,Attempt:0,}" Jan 17 00:21:49.670712 kubelet[2999]: E0117 00:21:49.670668 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="800ms" Jan 17 00:21:49.819956 kubelet[2999]: I0117 00:21:49.819928 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:49.820259 kubelet[2999]: E0117 
00:21:49.820236 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.137:6443/api/v1/nodes\": dial tcp 172.31.17.137:6443: connect: connection refused" node="ip-172-31-17-137" Jan 17 00:21:49.968121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1517411735.mount: Deactivated successfully. Jan 17 00:21:49.974229 containerd[2097]: time="2026-01-17T00:21:49.974180720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:49.976449 containerd[2097]: time="2026-01-17T00:21:49.975013478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:49.976449 containerd[2097]: time="2026-01-17T00:21:49.975858428Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:49.977041 containerd[2097]: time="2026-01-17T00:21:49.977004774Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:21:49.978073 containerd[2097]: time="2026-01-17T00:21:49.978030450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:21:49.979181 containerd[2097]: time="2026-01-17T00:21:49.978977734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:21:49.980280 containerd[2097]: time="2026-01-17T00:21:49.980227043Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:49.983360 containerd[2097]: time="2026-01-17T00:21:49.983245691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:21:49.984426 containerd[2097]: time="2026-01-17T00:21:49.984394896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.84349ms" Jan 17 00:21:49.995034 containerd[2097]: time="2026-01-17T00:21:49.994889390Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.361737ms" Jan 17 00:21:50.003443 containerd[2097]: time="2026-01-17T00:21:50.003345425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
494.344509ms" Jan 17 00:21:50.008185 kubelet[2999]: W0117 00:21:50.008128 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:50.008293 kubelet[2999]: E0117 00:21:50.008195 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:50.081713 kubelet[2999]: W0117 00:21:50.081529 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:50.081713 kubelet[2999]: E0117 00:21:50.081624 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:50.289255 containerd[2097]: time="2026-01-17T00:21:50.288992302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:50.290589 containerd[2097]: time="2026-01-17T00:21:50.289879250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:50.290589 containerd[2097]: time="2026-01-17T00:21:50.289965899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.290589 containerd[2097]: time="2026-01-17T00:21:50.290098119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.291079 containerd[2097]: time="2026-01-17T00:21:50.290996276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:50.291238 containerd[2097]: time="2026-01-17T00:21:50.291196542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:50.291338 containerd[2097]: time="2026-01-17T00:21:50.291309113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.292499 containerd[2097]: time="2026-01-17T00:21:50.292137065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:21:50.292499 containerd[2097]: time="2026-01-17T00:21:50.292215669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:21:50.292499 containerd[2097]: time="2026-01-17T00:21:50.292240018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.292499 containerd[2097]: time="2026-01-17T00:21:50.292423948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.292824 containerd[2097]: time="2026-01-17T00:21:50.291487547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:21:50.421847 containerd[2097]: time="2026-01-17T00:21:50.420732711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-137,Uid:99e4ab7890ecf57c1ad633f8da4d48c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"13390b1dd422724cacf5beb7f2646eafd7b368a90923d83482b23da774003e76\"" Jan 17 00:21:50.423398 containerd[2097]: time="2026-01-17T00:21:50.422923348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-137,Uid:46a94013ce399bb994b809b96c4ccaff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c4b3d10f0917c91f1d6e1de0a2200f8872295621e3297385aa4fc0247d10c7c\"" Jan 17 00:21:50.424748 containerd[2097]: time="2026-01-17T00:21:50.424717305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-137,Uid:c3e529a9a15c0a3b38c56208ec0bd099,Namespace:kube-system,Attempt:0,} returns sandbox id \"89de8d603945b67e748db6a5fc0951b8408b3377e0e5ee981660fc75a4863437\"" Jan 17 00:21:50.427461 containerd[2097]: time="2026-01-17T00:21:50.427430516Z" level=info msg="CreateContainer within sandbox \"13390b1dd422724cacf5beb7f2646eafd7b368a90923d83482b23da774003e76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:21:50.429028 containerd[2097]: time="2026-01-17T00:21:50.428980898Z" level=info msg="CreateContainer within sandbox \"3c4b3d10f0917c91f1d6e1de0a2200f8872295621e3297385aa4fc0247d10c7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:21:50.429442 containerd[2097]: time="2026-01-17T00:21:50.429416061Z" level=info msg="CreateContainer within sandbox \"89de8d603945b67e748db6a5fc0951b8408b3377e0e5ee981660fc75a4863437\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:21:50.457503 containerd[2097]: time="2026-01-17T00:21:50.457454666Z" level=info msg="CreateContainer within sandbox \"89de8d603945b67e748db6a5fc0951b8408b3377e0e5ee981660fc75a4863437\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466\"" Jan 17 00:21:50.458677 containerd[2097]: time="2026-01-17T00:21:50.458504325Z" level=info msg="StartContainer for \"2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466\"" Jan 17 00:21:50.459981 kubelet[2999]: W0117 00:21:50.459838 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:50.459981 kubelet[2999]: E0117 00:21:50.459920 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:50.461449 containerd[2097]: 
time="2026-01-17T00:21:50.461244338Z" level=info msg="CreateContainer within sandbox \"3c4b3d10f0917c91f1d6e1de0a2200f8872295621e3297385aa4fc0247d10c7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"282c511a6f3363e65478808b04b9909ad0c50d46b1f8aa5fe8174109de70d0fc\"" Jan 17 00:21:50.463977 containerd[2097]: time="2026-01-17T00:21:50.463773944Z" level=info msg="CreateContainer within sandbox \"13390b1dd422724cacf5beb7f2646eafd7b368a90923d83482b23da774003e76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33\"" Jan 17 00:21:50.464663 containerd[2097]: time="2026-01-17T00:21:50.464635257Z" level=info msg="StartContainer for \"10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33\"" Jan 17 00:21:50.470682 containerd[2097]: time="2026-01-17T00:21:50.470185013Z" level=info msg="StartContainer for \"282c511a6f3363e65478808b04b9909ad0c50d46b1f8aa5fe8174109de70d0fc\"" Jan 17 00:21:50.472182 kubelet[2999]: E0117 00:21:50.472109 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="1.6s" Jan 17 00:21:50.547421 kubelet[2999]: W0117 00:21:50.547101 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:50.547918 kubelet[2999]: E0117 00:21:50.547768 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:50.611299 containerd[2097]: time="2026-01-17T00:21:50.610823656Z" level=info msg="StartContainer for \"10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33\" returns successfully" Jan 17 00:21:50.627247 kubelet[2999]: I0117 00:21:50.625862 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:50.627247 kubelet[2999]: E0117 00:21:50.626245 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.137:6443/api/v1/nodes\": dial tcp 172.31.17.137:6443: connect: connection refused" node="ip-172-31-17-137" Jan 17 00:21:50.639437 containerd[2097]: time="2026-01-17T00:21:50.639126811Z" level=info msg="StartContainer for \"2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466\" returns successfully" Jan 17 00:21:50.643016 containerd[2097]: time="2026-01-17T00:21:50.642637740Z" level=info msg="StartContainer for \"282c511a6f3363e65478808b04b9909ad0c50d46b1f8aa5fe8174109de70d0fc\" returns successfully" Jan 17 00:21:51.024588 kubelet[2999]: E0117 00:21:51.024093 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.137:6443: connect: connection refused" 
logger="UnhandledError" Jan 17 00:21:51.086434 update_engine[2079]: I20260117 00:21:51.085604 2079 update_attempter.cc:509] Updating boot flags... Jan 17 00:21:51.118937 kubelet[2999]: E0117 00:21:51.118903 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:51.124191 kubelet[2999]: E0117 00:21:51.123580 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:51.128182 kubelet[2999]: E0117 00:21:51.127978 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:51.167594 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3276) Jan 17 00:21:52.073022 kubelet[2999]: E0117 00:21:52.072911 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="3.2s" Jan 17 00:21:52.128092 kubelet[2999]: E0117 00:21:52.127822 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:52.128092 kubelet[2999]: E0117 00:21:52.127973 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:52.228588 kubelet[2999]: I0117 00:21:52.228253 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:52.228588 kubelet[2999]: E0117 00:21:52.228537 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.137:6443/api/v1/nodes\": dial tcp 172.31.17.137:6443: connect: connection refused" node="ip-172-31-17-137" Jan 17 00:21:52.607688 kubelet[2999]: W0117 00:21:52.607627 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:52.607841 kubelet[2999]: E0117 00:21:52.607692 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:53.076884 kubelet[2999]: W0117 00:21:53.076812 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:53.077385 kubelet[2999]: E0117 00:21:53.076900 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:53.098915 kubelet[2999]: W0117 00:21:53.098800 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:53.098915 kubelet[2999]: E0117 00:21:53.098886 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-137&limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:53.109866 kubelet[2999]: W0117 00:21:53.109735 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.137:6443: connect: connection refused Jan 17 00:21:53.109866 kubelet[2999]: E0117 00:21:53.109818 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:53.130405 kubelet[2999]: E0117 00:21:53.130333 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:55.180979 kubelet[2999]: E0117 00:21:55.180924 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.137:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:21:55.274722 kubelet[2999]: E0117 00:21:55.274358 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": dial tcp 172.31.17.137:6443: connect: connection refused" interval="6.4s" Jan 17 00:21:55.431172 kubelet[2999]: I0117 00:21:55.430651 2999 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:55.653689 kubelet[2999]: E0117 00:21:55.653439 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:56.196840 kubelet[2999]: E0117 00:21:56.196788 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-137\" not found" node="ip-172-31-17-137" Jan 17 00:21:56.917110 kubelet[2999]: I0117 00:21:56.916920 2999 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-137" Jan 17 00:21:56.917110 kubelet[2999]: E0117 00:21:56.916958 2999 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-137\": node \"ip-172-31-17-137\" not found" Jan 17 00:21:56.945261 kubelet[2999]: E0117 
00:21:56.945226 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.045779 kubelet[2999]: E0117 00:21:57.045736 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.146392 kubelet[2999]: E0117 00:21:57.146345 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.247386 kubelet[2999]: E0117 00:21:57.247253 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.348131 kubelet[2999]: E0117 00:21:57.348081 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.448927 kubelet[2999]: E0117 00:21:57.448873 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.549989 kubelet[2999]: E0117 00:21:57.549944 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.650797 kubelet[2999]: E0117 00:21:57.650753 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.751513 kubelet[2999]: E0117 00:21:57.751468 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.852222 kubelet[2999]: E0117 00:21:57.852084 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:57.952972 kubelet[2999]: E0117 00:21:57.952907 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:58.053761 kubelet[2999]: E0117 00:21:58.053706 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:58.154730 kubelet[2999]: E0117 00:21:58.154468 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:58.255498 kubelet[2999]: E0117 00:21:58.255457 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:58.356445 kubelet[2999]: E0117 00:21:58.356398 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:58.467519 kubelet[2999]: I0117 00:21:58.467403 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:58.478149 kubelet[2999]: I0117 00:21:58.478101 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:58.482603 kubelet[2999]: I0117 00:21:58.482550 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:21:58.953082 systemd[1]: Reloading requested from client PID 3370 ('systemctl') (unit session-9.scope)... Jan 17 00:21:58.953103 systemd[1]: Reloading... Jan 17 00:21:59.026588 zram_generator::config[3411]: No configuration found. 
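The RunPodSandbox → CreateContainer → StartContainer sequence logged earlier for the three control-plane static pods is the standard CRI call order: a pause-image sandbox first, then the workload container inside it. A sketch of the first step against the CRI gRPC API; the socket path is containerd's default (assumed), and a real RunPodSandboxRequest carries more fields (log directory, Linux options) than shown here:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Sandbox first; CreateContainer/StartContainer then reference its ID,
	// matching the sandbox/container IDs in the log above.
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-ip-172-31-17-137",
				Namespace: "kube-system",
				Uid:       "99e4ab7890ecf57c1ad633f8da4d48c4",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("sandbox id:", resp.PodSandboxId)
}
```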
Jan 17 00:21:59.052481 kubelet[2999]: I0117 00:21:59.052445 2999 apiserver.go:52] "Watching apiserver" Jan 17 00:21:59.068580 kubelet[2999]: I0117 00:21:59.068074 2999 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:21:59.129392 kubelet[2999]: I0117 00:21:59.129256 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-137" podStartSLOduration=1.129239857 podStartE2EDuration="1.129239857s" podCreationTimestamp="2026-01-17 00:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:59.115370818 +0000 UTC m=+10.591522691" watchObservedRunningTime="2026-01-17 00:21:59.129239857 +0000 UTC m=+10.605391727" Jan 17 00:21:59.154058 kubelet[2999]: I0117 00:21:59.153945 2999 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-137" podStartSLOduration=1.153927417 podStartE2EDuration="1.153927417s" podCreationTimestamp="2026-01-17 00:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:21:59.137283258 +0000 UTC m=+10.613435132" watchObservedRunningTime="2026-01-17 00:21:59.153927417 +0000 UTC m=+10.630079290" Jan 17 00:21:59.193749 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:21:59.286807 systemd[1]: Reloading finished in 333 ms. Jan 17 00:21:59.315638 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:59.316246 kubelet[2999]: I0117 00:21:59.315631 2999 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:59.330070 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:21:59.330372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:59.336072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:21:59.571809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:21:59.582057 (kubelet)[3480]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:21:59.636256 kubelet[3480]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:21:59.637679 kubelet[3480]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:21:59.637679 kubelet[3480]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:21:59.637679 kubelet[3480]: I0117 00:21:59.636749 3480 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:21:59.643507 kubelet[3480]: I0117 00:21:59.643469 3480 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:21:59.643507 kubelet[3480]: I0117 00:21:59.643504 3480 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:21:59.647579 kubelet[3480]: I0117 00:21:59.645642 3480 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:21:59.648543 kubelet[3480]: I0117 00:21:59.648520 3480 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:21:59.653978 kubelet[3480]: I0117 00:21:59.653949 3480 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:21:59.658782 kubelet[3480]: E0117 00:21:59.658748 3480 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:21:59.658782 kubelet[3480]: I0117 00:21:59.658778 3480 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:21:59.661779 kubelet[3480]: I0117 00:21:59.661747 3480 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:21:59.662243 kubelet[3480]: I0117 00:21:59.662209 3480 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:21:59.662419 kubelet[3480]: I0117 00:21:59.662238 3480 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-137","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:21:59.662419 kubelet[3480]: I0117 00:21:59.662418 3480 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 17 00:21:59.662574 kubelet[3480]: I0117 00:21:59.662429 3480 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:21:59.662574 kubelet[3480]: I0117 00:21:59.662473 3480 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:59.662630 kubelet[3480]: I0117 00:21:59.662625 3480 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:21:59.662654 kubelet[3480]: I0117 00:21:59.662643 3480 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:21:59.662683 kubelet[3480]: I0117 00:21:59.662661 3480 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:21:59.665663 kubelet[3480]: I0117 00:21:59.665635 3480 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:21:59.670842 kubelet[3480]: I0117 00:21:59.666675 3480 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:21:59.670842 kubelet[3480]: I0117 00:21:59.667037 3480 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:21:59.670842 kubelet[3480]: I0117 00:21:59.667427 3480 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:21:59.670842 kubelet[3480]: I0117 00:21:59.667450 3480 server.go:1287] "Started kubelet" Jan 17 00:21:59.678996 kubelet[3480]: I0117 00:21:59.678973 3480 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:21:59.689372 kubelet[3480]: I0117 00:21:59.689339 3480 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:21:59.690813 kubelet[3480]: I0117 00:21:59.690789 3480 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:21:59.692312 kubelet[3480]: I0117 00:21:59.692293 3480 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:21:59.693958 kubelet[3480]: I0117 00:21:59.693946 3480 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:21:59.694443 kubelet[3480]: I0117 00:21:59.692332 3480 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:21:59.695084 kubelet[3480]: I0117 00:21:59.695071 3480 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:21:59.696148 kubelet[3480]: I0117 00:21:59.696134 3480 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:21:59.697434 kubelet[3480]: I0117 00:21:59.696712 3480 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:21:59.702531 kubelet[3480]: E0117 00:21:59.701212 3480 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-137\" not found" Jan 17 00:21:59.704387 kubelet[3480]: I0117 00:21:59.704370 3480 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:21:59.704593 kubelet[3480]: I0117 00:21:59.704574 3480 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:21:59.708808 kubelet[3480]: E0117 00:21:59.708787 3480 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:21:59.708951 kubelet[3480]: I0117 00:21:59.708633 3480 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:21:59.709158 kubelet[3480]: I0117 00:21:59.709133 3480 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:21:59.710868 kubelet[3480]: I0117 00:21:59.710677 3480 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:21:59.710868 kubelet[3480]: I0117 00:21:59.710704 3480 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:21:59.710868 kubelet[3480]: I0117 00:21:59.710719 3480 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:21:59.710868 kubelet[3480]: I0117 00:21:59.710725 3480 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:21:59.710868 kubelet[3480]: E0117 00:21:59.710769 3480 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:21:59.773219 kubelet[3480]: I0117 00:21:59.773176 3480 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:21:59.773219 kubelet[3480]: I0117 00:21:59.773194 3480 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:21:59.773219 kubelet[3480]: I0117 00:21:59.773211 3480 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:21:59.773407 kubelet[3480]: I0117 00:21:59.773364 3480 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:21:59.773407 kubelet[3480]: I0117 00:21:59.773373 3480 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:21:59.773407 kubelet[3480]: I0117 00:21:59.773390 3480 policy_none.go:49] "None policy: Start" Jan 17 00:21:59.773407 kubelet[3480]: I0117 00:21:59.773399 3480 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:21:59.773407 kubelet[3480]: I0117 00:21:59.773407 3480 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:21:59.773553 kubelet[3480]: I0117 00:21:59.773497 3480 state_mem.go:75] "Updated machine memory state" Jan 17 00:21:59.775947 kubelet[3480]: I0117 00:21:59.774674 3480 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:21:59.775947 kubelet[3480]: I0117 00:21:59.774826 3480 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:21:59.775947 kubelet[3480]: I0117 00:21:59.774835 3480 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:21:59.775947 kubelet[3480]: I0117 00:21:59.775802 3480 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:21:59.777425 kubelet[3480]: E0117 00:21:59.777390 3480 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:21:59.811434 kubelet[3480]: I0117 00:21:59.811377 3480 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:59.813171 kubelet[3480]: I0117 00:21:59.812711 3480 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.813171 kubelet[3480]: I0117 00:21:59.812800 3480 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:21:59.817691 kubelet[3480]: E0117 00:21:59.817662 3480 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-137\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:59.818492 kubelet[3480]: E0117 00:21:59.818418 3480 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-137\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.818492 kubelet[3480]: E0117 00:21:59.818424 3480 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-137\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:21:59.881259 kubelet[3480]: I0117 00:21:59.878671 3480 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-137" Jan 17 00:21:59.888682 kubelet[3480]: I0117 00:21:59.888644 3480 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-137" Jan 17 00:21:59.888819 kubelet[3480]: I0117 00:21:59.888724 3480 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-137" Jan 17 00:21:59.899313 kubelet[3480]: I0117 00:21:59.899034 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.899313 kubelet[3480]: I0117 00:21:59.899085 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.899313 kubelet[3480]: I0117 00:21:59.899103 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-ca-certs\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:59.899313 kubelet[3480]: I0117 00:21:59.899118 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:59.899313 kubelet[3480]: I0117 00:21:59.899153 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.899554 kubelet[3480]: I0117 00:21:59.899170 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.899554 kubelet[3480]: I0117 00:21:59.899185 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99e4ab7890ecf57c1ad633f8da4d48c4-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-137\" (UID: \"99e4ab7890ecf57c1ad633f8da4d48c4\") " pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:21:59.900900 kubelet[3480]: I0117 00:21:59.899584 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46a94013ce399bb994b809b96c4ccaff-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-137\" (UID: \"46a94013ce399bb994b809b96c4ccaff\") " pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:21:59.900900 kubelet[3480]: I0117 00:21:59.899614 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c3e529a9a15c0a3b38c56208ec0bd099-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-137\" (UID: \"c3e529a9a15c0a3b38c56208ec0bd099\") " pod="kube-system/kube-controller-manager-ip-172-31-17-137" Jan 17 00:21:59.972890 sudo[3511]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:21:59.973209 sudo[3511]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:22:00.666931 kubelet[3480]: I0117 00:22:00.666289 3480 apiserver.go:52] "Watching apiserver" Jan 17 00:22:00.697470 kubelet[3480]: I0117 00:22:00.697404 3480 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:22:00.742189 kubelet[3480]: I0117 00:22:00.739801 3480 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:22:00.742189 kubelet[3480]: I0117 00:22:00.740164 3480 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:22:00.746932 kubelet[3480]: E0117 00:22:00.746787 3480 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-137\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-137" Jan 17 00:22:00.747543 kubelet[3480]: E0117 00:22:00.747526 3480 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-137\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-137" Jan 17 00:22:00.801621 sudo[3511]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:02.989931 sudo[2475]: pam_unix(sudo:session): session closed for user root Jan 17 00:22:03.068454 sshd[2471]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:03.072030 systemd[1]: sshd@8-172.31.17.137:22-4.153.228.146:56354.service: Deactivated successfully. 
Jan 17 00:22:03.074577 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:22:03.075829 systemd-logind[2077]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:22:03.077355 systemd-logind[2077]: Removed session 9. Jan 17 00:22:04.566099 kubelet[3480]: I0117 00:22:04.566060 3480 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:22:04.566619 kubelet[3480]: I0117 00:22:04.566575 3480 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:22:04.566658 containerd[2097]: time="2026-01-17T00:22:04.566397908Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:22:04.626341 kubelet[3480]: I0117 00:22:04.626278 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-etc-cni-netd\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626341 kubelet[3480]: I0117 00:22:04.626330 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-kernel\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626341 kubelet[3480]: I0117 00:22:04.626349 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-lib-modules\") pod \"kube-proxy-qwmvn\" (UID: \"eea301b6-9e40-4e9e-a281-bcc570fd4fa3\") " pod="kube-system/kube-proxy-qwmvn" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626365 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgt6\" (UniqueName: \"kubernetes.io/projected/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-kube-api-access-fsgt6\") pod \"kube-proxy-qwmvn\" (UID: \"eea301b6-9e40-4e9e-a281-bcc570fd4fa3\") " pod="kube-system/kube-proxy-qwmvn" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626385 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-hostproc\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626400 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-lib-modules\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626414 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-bpf-maps\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626428 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-net\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626533 kubelet[3480]: I0117 00:22:04.626443 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-cgroup\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626456 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cni-path\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626470 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-run\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626484 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-xtables-lock\") pod \"kube-proxy-qwmvn\" (UID: \"eea301b6-9e40-4e9e-a281-bcc570fd4fa3\") " pod="kube-system/kube-proxy-qwmvn" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626497 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx9n4\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626511 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-kube-proxy\") pod \"kube-proxy-qwmvn\" (UID: \"eea301b6-9e40-4e9e-a281-bcc570fd4fa3\") " pod="kube-system/kube-proxy-qwmvn" Jan 17 00:22:04.626736 kubelet[3480]: I0117 00:22:04.626524 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-xtables-lock\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626898 kubelet[3480]: I0117 00:22:04.626538 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f244c0-5566-4f41-a65d-970d7e108157-clustermesh-secrets\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626898 kubelet[3480]: I0117 00:22:04.626554 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f244c0-5566-4f41-a65d-970d7e108157-cilium-config-path\") pod \"cilium-pbkpm\" (UID: 
\"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.626898 kubelet[3480]: I0117 00:22:04.626594 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-hubble-tls\") pod \"cilium-pbkpm\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " pod="kube-system/cilium-pbkpm" Jan 17 00:22:04.745482 kubelet[3480]: E0117 00:22:04.745379 3480 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:22:04.745482 kubelet[3480]: E0117 00:22:04.745377 3480 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 00:22:04.745482 kubelet[3480]: E0117 00:22:04.745417 3480 projected.go:194] Error preparing data for projected volume kube-api-access-xx9n4 for pod kube-system/cilium-pbkpm: configmap "kube-root-ca.crt" not found Jan 17 00:22:04.745482 kubelet[3480]: E0117 00:22:04.745426 3480 projected.go:194] Error preparing data for projected volume kube-api-access-fsgt6 for pod kube-system/kube-proxy-qwmvn: configmap "kube-root-ca.crt" not found Jan 17 00:22:04.745694 kubelet[3480]: E0117 00:22:04.745498 3480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4 podName:69f244c0-5566-4f41-a65d-970d7e108157 nodeName:}" failed. No retries permitted until 2026-01-17 00:22:05.245479301 +0000 UTC m=+5.653756878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xx9n4" (UniqueName: "kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4") pod "cilium-pbkpm" (UID: "69f244c0-5566-4f41-a65d-970d7e108157") : configmap "kube-root-ca.crt" not found Jan 17 00:22:04.746883 kubelet[3480]: E0117 00:22:04.746749 3480 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-kube-api-access-fsgt6 podName:eea301b6-9e40-4e9e-a281-bcc570fd4fa3 nodeName:}" failed. No retries permitted until 2026-01-17 00:22:05.246730407 +0000 UTC m=+5.655007985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fsgt6" (UniqueName: "kubernetes.io/projected/eea301b6-9e40-4e9e-a281-bcc570fd4fa3-kube-api-access-fsgt6") pod "kube-proxy-qwmvn" (UID: "eea301b6-9e40-4e9e-a281-bcc570fd4fa3") : configmap "kube-root-ca.crt" not found Jan 17 00:22:05.395845 containerd[2097]: time="2026-01-17T00:22:05.395444451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwmvn,Uid:eea301b6-9e40-4e9e-a281-bcc570fd4fa3,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:05.399327 containerd[2097]: time="2026-01-17T00:22:05.399271345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbkpm,Uid:69f244c0-5566-4f41-a65d-970d7e108157,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:05.436319 containerd[2097]: time="2026-01-17T00:22:05.435667372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:05.436319 containerd[2097]: time="2026-01-17T00:22:05.435735175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:05.436319 containerd[2097]: time="2026-01-17T00:22:05.435758793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.436319 containerd[2097]: time="2026-01-17T00:22:05.435865169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.448598 containerd[2097]: time="2026-01-17T00:22:05.448248891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:05.448598 containerd[2097]: time="2026-01-17T00:22:05.448445371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:05.448598 containerd[2097]: time="2026-01-17T00:22:05.448471505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.449184 containerd[2097]: time="2026-01-17T00:22:05.448863315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.610835 containerd[2097]: time="2026-01-17T00:22:05.609457155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pbkpm,Uid:69f244c0-5566-4f41-a65d-970d7e108157,Namespace:kube-system,Attempt:0,} returns sandbox id \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\"" Jan 17 00:22:05.613762 containerd[2097]: time="2026-01-17T00:22:05.613711182Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:22:05.631900 containerd[2097]: time="2026-01-17T00:22:05.631854671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwmvn,Uid:eea301b6-9e40-4e9e-a281-bcc570fd4fa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"eae92e23628695ef546261c3b406205ad6c5baad31d987dc6d99ea749466174c\"" Jan 17 00:22:05.633098 kubelet[3480]: I0117 00:22:05.633065 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l7mgl\" (UID: \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\") " pod="kube-system/cilium-operator-6c4d7847fc-l7mgl" Jan 17 00:22:05.633949 kubelet[3480]: I0117 00:22:05.633112 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdk8x\" (UniqueName: \"kubernetes.io/projected/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-kube-api-access-gdk8x\") pod \"cilium-operator-6c4d7847fc-l7mgl\" (UID: \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\") " pod="kube-system/cilium-operator-6c4d7847fc-l7mgl" Jan 17 00:22:05.635826 containerd[2097]: time="2026-01-17T00:22:05.635787101Z" level=info msg="CreateContainer within sandbox \"eae92e23628695ef546261c3b406205ad6c5baad31d987dc6d99ea749466174c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:22:05.658703 containerd[2097]: time="2026-01-17T00:22:05.658176741Z" level=info msg="CreateContainer within sandbox \"eae92e23628695ef546261c3b406205ad6c5baad31d987dc6d99ea749466174c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"c340cf992384a4be52a9782d6a08ac5272edb7bfe5e96ba106daf4975da646ce\"" Jan 17 00:22:05.659549 containerd[2097]: time="2026-01-17T00:22:05.659513584Z" level=info msg="StartContainer for \"c340cf992384a4be52a9782d6a08ac5272edb7bfe5e96ba106daf4975da646ce\"" Jan 17 00:22:05.721931 containerd[2097]: time="2026-01-17T00:22:05.721881945Z" level=info msg="StartContainer for \"c340cf992384a4be52a9782d6a08ac5272edb7bfe5e96ba106daf4975da646ce\" returns successfully" Jan 17 00:22:05.800023 kubelet[3480]: I0117 00:22:05.799137 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qwmvn" podStartSLOduration=1.7991204 podStartE2EDuration="1.7991204s" podCreationTimestamp="2026-01-17 00:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:05.767426204 +0000 UTC m=+6.175703802" watchObservedRunningTime="2026-01-17 00:22:05.7991204 +0000 UTC m=+6.207397977" Jan 17 00:22:05.818085 containerd[2097]: time="2026-01-17T00:22:05.818040356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l7mgl,Uid:ac47a62d-0ad5-4599-bdfb-4a502444a7a2,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:05.849096 containerd[2097]: time="2026-01-17T00:22:05.848897036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:05.849408 containerd[2097]: time="2026-01-17T00:22:05.849230520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:05.849408 containerd[2097]: time="2026-01-17T00:22:05.849259871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.849408 containerd[2097]: time="2026-01-17T00:22:05.849352118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:05.914105 containerd[2097]: time="2026-01-17T00:22:05.913894026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l7mgl,Uid:ac47a62d-0ad5-4599-bdfb-4a502444a7a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\"" Jan 17 00:22:11.349100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363748781.mount: Deactivated successfully. 
Jan 17 00:22:13.785000 containerd[2097]: time="2026-01-17T00:22:13.784949755Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:13.789004 containerd[2097]: time="2026-01-17T00:22:13.788936626Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:22:13.795672 containerd[2097]: time="2026-01-17T00:22:13.795630857Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:13.803063 containerd[2097]: time="2026-01-17T00:22:13.797319181Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.183545176s" Jan 17 00:22:13.803063 containerd[2097]: time="2026-01-17T00:22:13.797367210Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:22:13.803063 containerd[2097]: time="2026-01-17T00:22:13.798835397Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:22:13.803063 containerd[2097]: time="2026-01-17T00:22:13.800071620Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:22:13.885651 containerd[2097]: time="2026-01-17T00:22:13.885596571Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\"" Jan 17 00:22:13.886362 containerd[2097]: time="2026-01-17T00:22:13.886335131Z" level=info msg="StartContainer for \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\"" Jan 17 00:22:14.010873 containerd[2097]: time="2026-01-17T00:22:14.010831467Z" level=info msg="StartContainer for \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\" returns successfully" Jan 17 00:22:14.065382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac-rootfs.mount: Deactivated successfully. 
Jan 17 00:22:14.181744 containerd[2097]: time="2026-01-17T00:22:14.165883972Z" level=info msg="shim disconnected" id=771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac namespace=k8s.io Jan 17 00:22:14.181744 containerd[2097]: time="2026-01-17T00:22:14.181743829Z" level=warning msg="cleaning up after shim disconnected" id=771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac namespace=k8s.io Jan 17 00:22:14.181979 containerd[2097]: time="2026-01-17T00:22:14.181759978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:14.799507 containerd[2097]: time="2026-01-17T00:22:14.799443793Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:22:14.812337 containerd[2097]: time="2026-01-17T00:22:14.812195971Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\"" Jan 17 00:22:14.812898 containerd[2097]: time="2026-01-17T00:22:14.812818125Z" level=info msg="StartContainer for \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\"" Jan 17 00:22:14.892010 containerd[2097]: time="2026-01-17T00:22:14.891961535Z" level=info msg="StartContainer for \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\" returns successfully" Jan 17 00:22:14.904009 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:22:14.904446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:14.904532 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:14.912987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:22:14.943309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe-rootfs.mount: Deactivated successfully. Jan 17 00:22:14.951254 containerd[2097]: time="2026-01-17T00:22:14.950949449Z" level=info msg="shim disconnected" id=193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe namespace=k8s.io Jan 17 00:22:14.951254 containerd[2097]: time="2026-01-17T00:22:14.950995441Z" level=warning msg="cleaning up after shim disconnected" id=193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe namespace=k8s.io Jan 17 00:22:14.951254 containerd[2097]: time="2026-01-17T00:22:14.951004117Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:14.954750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:22:15.394097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255535432.mount: Deactivated successfully. 
Jan 17 00:22:15.814015 containerd[2097]: time="2026-01-17T00:22:15.813499166Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:22:15.846413 containerd[2097]: time="2026-01-17T00:22:15.845937196Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\"" Jan 17 00:22:15.848369 containerd[2097]: time="2026-01-17T00:22:15.848163876Z" level=info msg="StartContainer for \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\"" Jan 17 00:22:15.890131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611345744.mount: Deactivated successfully. Jan 17 00:22:15.967678 containerd[2097]: time="2026-01-17T00:22:15.967637308Z" level=info msg="StartContainer for \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\" returns successfully" Jan 17 00:22:16.010404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc-rootfs.mount: Deactivated successfully. Jan 17 00:22:16.231927 containerd[2097]: time="2026-01-17T00:22:16.231659747Z" level=info msg="shim disconnected" id=4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc namespace=k8s.io Jan 17 00:22:16.231927 containerd[2097]: time="2026-01-17T00:22:16.231706886Z" level=warning msg="cleaning up after shim disconnected" id=4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc namespace=k8s.io Jan 17 00:22:16.231927 containerd[2097]: time="2026-01-17T00:22:16.231715086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:16.376940 containerd[2097]: time="2026-01-17T00:22:16.376897404Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:16.377988 containerd[2097]: time="2026-01-17T00:22:16.377847754Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:22:16.379845 containerd[2097]: time="2026-01-17T00:22:16.378983277Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:22:16.380303 containerd[2097]: time="2026-01-17T00:22:16.380273490Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.581412175s" Jan 17 00:22:16.380351 containerd[2097]: time="2026-01-17T00:22:16.380309560Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:22:16.383700 containerd[2097]: time="2026-01-17T00:22:16.383671317Z" 
level=info msg="CreateContainer within sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:22:16.398744 containerd[2097]: time="2026-01-17T00:22:16.398699761Z" level=info msg="CreateContainer within sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\"" Jan 17 00:22:16.400688 containerd[2097]: time="2026-01-17T00:22:16.399526705Z" level=info msg="StartContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\"" Jan 17 00:22:16.471600 containerd[2097]: time="2026-01-17T00:22:16.471540715Z" level=info msg="StartContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" returns successfully" Jan 17 00:22:16.827247 containerd[2097]: time="2026-01-17T00:22:16.827197498Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:22:16.894528 containerd[2097]: time="2026-01-17T00:22:16.894483732Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\"" Jan 17 00:22:16.898247 containerd[2097]: time="2026-01-17T00:22:16.896246746Z" level=info msg="StartContainer for \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\"" Jan 17 00:22:16.999680 containerd[2097]: time="2026-01-17T00:22:16.995827684Z" level=info msg="StartContainer for \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\" returns successfully" Jan 17 00:22:17.025377 kubelet[3480]: I0117 00:22:17.025304 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l7mgl" podStartSLOduration=1.559275964 podStartE2EDuration="12.025272611s" podCreationTimestamp="2026-01-17 00:22:05 +0000 UTC" firstStartedPulling="2026-01-17 00:22:05.915268332 +0000 UTC m=+6.323545910" lastFinishedPulling="2026-01-17 00:22:16.38126498 +0000 UTC m=+16.789542557" observedRunningTime="2026-01-17 00:22:16.839217035 +0000 UTC m=+17.247494651" watchObservedRunningTime="2026-01-17 00:22:17.025272611 +0000 UTC m=+17.433550208" Jan 17 00:22:17.034323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5-rootfs.mount: Deactivated successfully. 
Jan 17 00:22:17.046978 containerd[2097]: time="2026-01-17T00:22:17.046902653Z" level=info msg="shim disconnected" id=4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5 namespace=k8s.io Jan 17 00:22:17.049798 containerd[2097]: time="2026-01-17T00:22:17.047270109Z" level=warning msg="cleaning up after shim disconnected" id=4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5 namespace=k8s.io Jan 17 00:22:17.049798 containerd[2097]: time="2026-01-17T00:22:17.047294153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:22:17.098646 containerd[2097]: time="2026-01-17T00:22:17.098314349Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:22:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:22:17.828322 containerd[2097]: time="2026-01-17T00:22:17.828194464Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:22:17.851981 containerd[2097]: time="2026-01-17T00:22:17.851929565Z" level=info msg="CreateContainer within sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\"" Jan 17 00:22:17.853353 containerd[2097]: time="2026-01-17T00:22:17.853318678Z" level=info msg="StartContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\"" Jan 17 00:22:17.922437 containerd[2097]: time="2026-01-17T00:22:17.922383604Z" level=info msg="StartContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" returns successfully" Jan 17 00:22:18.166463 kubelet[3480]: I0117 00:22:18.165491 3480 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:22:18.417470 kubelet[3480]: I0117 00:22:18.417367 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84r6n\" (UniqueName: \"kubernetes.io/projected/a589cd35-9085-48cf-a6e6-a2c14a111637-kube-api-access-84r6n\") pod \"coredns-668d6bf9bc-99kss\" (UID: \"a589cd35-9085-48cf-a6e6-a2c14a111637\") " pod="kube-system/coredns-668d6bf9bc-99kss" Jan 17 00:22:18.417470 kubelet[3480]: I0117 00:22:18.417409 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b52515-8c52-41ef-88ed-0b3b9d583b60-config-volume\") pod \"coredns-668d6bf9bc-w4ln8\" (UID: \"e1b52515-8c52-41ef-88ed-0b3b9d583b60\") " pod="kube-system/coredns-668d6bf9bc-w4ln8" Jan 17 00:22:18.417470 kubelet[3480]: I0117 00:22:18.417428 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57jpr\" (UniqueName: \"kubernetes.io/projected/e1b52515-8c52-41ef-88ed-0b3b9d583b60-kube-api-access-57jpr\") pod \"coredns-668d6bf9bc-w4ln8\" (UID: \"e1b52515-8c52-41ef-88ed-0b3b9d583b60\") " pod="kube-system/coredns-668d6bf9bc-w4ln8" Jan 17 00:22:18.417470 kubelet[3480]: I0117 00:22:18.417451 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a589cd35-9085-48cf-a6e6-a2c14a111637-config-volume\") pod \"coredns-668d6bf9bc-99kss\" (UID: 
\"a589cd35-9085-48cf-a6e6-a2c14a111637\") " pod="kube-system/coredns-668d6bf9bc-99kss" Jan 17 00:22:18.611958 containerd[2097]: time="2026-01-17T00:22:18.611917707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w4ln8,Uid:e1b52515-8c52-41ef-88ed-0b3b9d583b60,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:18.614181 containerd[2097]: time="2026-01-17T00:22:18.611999218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99kss,Uid:a589cd35-9085-48cf-a6e6-a2c14a111637,Namespace:kube-system,Attempt:0,}" Jan 17 00:22:18.860751 kubelet[3480]: I0117 00:22:18.858347 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pbkpm" podStartSLOduration=6.671219408 podStartE2EDuration="14.858320979s" podCreationTimestamp="2026-01-17 00:22:04 +0000 UTC" firstStartedPulling="2026-01-17 00:22:05.611102266 +0000 UTC m=+6.019379855" lastFinishedPulling="2026-01-17 00:22:13.79820385 +0000 UTC m=+14.206481426" observedRunningTime="2026-01-17 00:22:18.858104273 +0000 UTC m=+19.266381870" watchObservedRunningTime="2026-01-17 00:22:18.858320979 +0000 UTC m=+19.266598575" Jan 17 00:22:20.785042 (udev-worker)[4305]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:20.785310 systemd-networkd[1652]: cilium_host: Link UP Jan 17 00:22:20.786085 systemd-networkd[1652]: cilium_net: Link UP Jan 17 00:22:20.786513 (udev-worker)[4273]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:20.786921 systemd-networkd[1652]: cilium_net: Gained carrier Jan 17 00:22:20.787876 systemd-networkd[1652]: cilium_host: Gained carrier Jan 17 00:22:20.945877 systemd-networkd[1652]: cilium_net: Gained IPv6LL Jan 17 00:22:21.142862 systemd-networkd[1652]: cilium_vxlan: Link UP Jan 17 00:22:21.142869 systemd-networkd[1652]: cilium_vxlan: Gained carrier Jan 17 00:22:21.449820 systemd-networkd[1652]: cilium_host: Gained IPv6LL Jan 17 00:22:21.762326 kernel: NET: Registered PF_ALG protocol family Jan 17 00:22:22.583332 systemd-networkd[1652]: lxc_health: Link UP Jan 17 00:22:22.589174 systemd-networkd[1652]: lxc_health: Gained carrier Jan 17 00:22:22.728179 (udev-worker)[4311]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:22:22.729422 systemd-networkd[1652]: lxc92fd65a07269: Link UP Jan 17 00:22:22.735675 kernel: eth0: renamed from tmp70d50 Jan 17 00:22:22.741866 systemd-networkd[1652]: lxc92fd65a07269: Gained carrier Jan 17 00:22:22.787203 systemd-networkd[1652]: lxc267fdfe71920: Link UP Jan 17 00:22:22.794762 kernel: eth0: renamed from tmpe51fb Jan 17 00:22:22.798646 systemd-networkd[1652]: lxc267fdfe71920: Gained carrier Jan 17 00:22:23.177882 systemd-networkd[1652]: cilium_vxlan: Gained IPv6LL Jan 17 00:22:24.265865 systemd-networkd[1652]: lxc92fd65a07269: Gained IPv6LL Jan 17 00:22:24.457737 systemd-networkd[1652]: lxc_health: Gained IPv6LL Jan 17 00:22:24.777753 systemd-networkd[1652]: lxc267fdfe71920: Gained IPv6LL Jan 17 00:22:27.303588 containerd[2097]: time="2026-01-17T00:22:27.301381168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:27.303588 containerd[2097]: time="2026-01-17T00:22:27.301517474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:27.303588 containerd[2097]: time="2026-01-17T00:22:27.301734452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:27.303588 containerd[2097]: time="2026-01-17T00:22:27.302064366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:27.344848 containerd[2097]: time="2026-01-17T00:22:27.343413367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:22:27.344848 containerd[2097]: time="2026-01-17T00:22:27.343512062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:22:27.344848 containerd[2097]: time="2026-01-17T00:22:27.343536839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:27.344848 containerd[2097]: time="2026-01-17T00:22:27.343690509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:22:27.432415 systemd[1]: run-containerd-runc-k8s.io-e51fb7415b6873490a74cfb52bb17cab8f8ca36a9fec9c844d36ede56eca8c99-runc.rzL485.mount: Deactivated successfully. Jan 17 00:22:27.506539 containerd[2097]: time="2026-01-17T00:22:27.506495429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-99kss,Uid:a589cd35-9085-48cf-a6e6-a2c14a111637,Namespace:kube-system,Attempt:0,} returns sandbox id \"70d508c9cc3446d04db7b15e941a07671db71a0dbe35b10f1ce562198a330e87\"" Jan 17 00:22:27.512603 containerd[2097]: time="2026-01-17T00:22:27.512355092Z" level=info msg="CreateContainer within sandbox \"70d508c9cc3446d04db7b15e941a07671db71a0dbe35b10f1ce562198a330e87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:27.557360 containerd[2097]: time="2026-01-17T00:22:27.556516510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w4ln8,Uid:e1b52515-8c52-41ef-88ed-0b3b9d583b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"e51fb7415b6873490a74cfb52bb17cab8f8ca36a9fec9c844d36ede56eca8c99\"" Jan 17 00:22:27.560024 containerd[2097]: time="2026-01-17T00:22:27.559986297Z" level=info msg="CreateContainer within sandbox \"e51fb7415b6873490a74cfb52bb17cab8f8ca36a9fec9c844d36ede56eca8c99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:22:27.673412 containerd[2097]: time="2026-01-17T00:22:27.672460874Z" level=info msg="CreateContainer within sandbox \"e51fb7415b6873490a74cfb52bb17cab8f8ca36a9fec9c844d36ede56eca8c99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2fa0a86159555141f41d743582b62b1f2e971e051403fb94cd42d5ee7896684\"" Jan 17 00:22:27.674733 containerd[2097]: time="2026-01-17T00:22:27.673717864Z" level=info msg="StartContainer for \"d2fa0a86159555141f41d743582b62b1f2e971e051403fb94cd42d5ee7896684\"" Jan 17 00:22:27.680308 containerd[2097]: time="2026-01-17T00:22:27.680269919Z" level=info msg="CreateContainer within sandbox \"70d508c9cc3446d04db7b15e941a07671db71a0dbe35b10f1ce562198a330e87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3390b3e699666d71de3e3b12879d59e34f3d4d8bd9e686c36c40025f14b0a57e\"" Jan 17 00:22:27.682672 containerd[2097]: 
time="2026-01-17T00:22:27.682638818Z" level=info msg="StartContainer for \"3390b3e699666d71de3e3b12879d59e34f3d4d8bd9e686c36c40025f14b0a57e\"" Jan 17 00:22:27.771879 containerd[2097]: time="2026-01-17T00:22:27.771841099Z" level=info msg="StartContainer for \"d2fa0a86159555141f41d743582b62b1f2e971e051403fb94cd42d5ee7896684\" returns successfully" Jan 17 00:22:27.776357 containerd[2097]: time="2026-01-17T00:22:27.776315070Z" level=info msg="StartContainer for \"3390b3e699666d71de3e3b12879d59e34f3d4d8bd9e686c36c40025f14b0a57e\" returns successfully" Jan 17 00:22:27.871100 kubelet[3480]: I0117 00:22:27.870129 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-99kss" podStartSLOduration=22.870111305000002 podStartE2EDuration="22.870111305s" podCreationTimestamp="2026-01-17 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:27.869568632 +0000 UTC m=+28.277846220" watchObservedRunningTime="2026-01-17 00:22:27.870111305 +0000 UTC m=+28.278388901" Jan 17 00:22:28.310634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423487856.mount: Deactivated successfully. Jan 17 00:22:28.872012 kubelet[3480]: I0117 00:22:28.871967 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w4ln8" podStartSLOduration=23.871950826 podStartE2EDuration="23.871950826s" podCreationTimestamp="2026-01-17 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:22:27.880587925 +0000 UTC m=+28.288865517" watchObservedRunningTime="2026-01-17 00:22:28.871950826 +0000 UTC m=+29.280228422" Jan 17 00:22:30.338115 ntpd[2052]: Listen normally on 6 cilium_host 192.168.0.245:123 Jan 17 00:22:30.338192 ntpd[2052]: Listen normally on 7 cilium_net [fe80::e87d:9dff:fec3:7d5c%4]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 6 cilium_host 192.168.0.245:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 7 cilium_net [fe80::e87d:9dff:fec3:7d5c%4]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 8 cilium_host [fe80::8009:11ff:fe11:f124%5]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 9 cilium_vxlan [fe80::f838:73ff:fe90:1a2f%6]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 10 lxc_health [fe80::cc27:32ff:fe92:5acc%8]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 11 lxc92fd65a07269 [fe80::9879:dff:fee1:3cee%10]:123 Jan 17 00:22:30.338580 ntpd[2052]: 17 Jan 00:22:30 ntpd[2052]: Listen normally on 12 lxc267fdfe71920 [fe80::a4e1:aaff:fee0:67f9%12]:123 Jan 17 00:22:30.338236 ntpd[2052]: Listen normally on 8 cilium_host [fe80::8009:11ff:fe11:f124%5]:123 Jan 17 00:22:30.338266 ntpd[2052]: Listen normally on 9 cilium_vxlan [fe80::f838:73ff:fe90:1a2f%6]:123 Jan 17 00:22:30.338305 ntpd[2052]: Listen normally on 10 lxc_health [fe80::cc27:32ff:fe92:5acc%8]:123 Jan 17 00:22:30.338335 ntpd[2052]: Listen normally on 11 lxc92fd65a07269 [fe80::9879:dff:fee1:3cee%10]:123 Jan 17 00:22:30.338364 ntpd[2052]: Listen normally on 12 lxc267fdfe71920 [fe80::a4e1:aaff:fee0:67f9%12]:123 Jan 17 00:22:38.688929 systemd[1]: Started sshd@9-172.31.17.137:22-4.153.228.146:57410.service - OpenSSH per-connection server 
daemon (4.153.228.146:57410). Jan 17 00:22:39.236551 sshd[4843]: Accepted publickey for core from 4.153.228.146 port 57410 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:39.238428 sshd[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:39.249905 systemd-logind[2077]: New session 10 of user core. Jan 17 00:22:39.256855 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:22:40.230336 sshd[4843]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:40.233260 systemd[1]: sshd@9-172.31.17.137:22-4.153.228.146:57410.service: Deactivated successfully. Jan 17 00:22:40.236996 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:22:40.239079 systemd-logind[2077]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:22:40.240213 systemd-logind[2077]: Removed session 10. Jan 17 00:22:45.308310 systemd[1]: Started sshd@10-172.31.17.137:22-4.153.228.146:57758.service - OpenSSH per-connection server daemon (4.153.228.146:57758). Jan 17 00:22:45.803602 sshd[4859]: Accepted publickey for core from 4.153.228.146 port 57758 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:45.805014 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:45.809655 systemd-logind[2077]: New session 11 of user core. Jan 17 00:22:45.814845 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:22:46.240744 sshd[4859]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:46.243651 systemd[1]: sshd@10-172.31.17.137:22-4.153.228.146:57758.service: Deactivated successfully. Jan 17 00:22:46.247077 systemd-logind[2077]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:22:46.248907 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:22:46.250280 systemd-logind[2077]: Removed session 11. Jan 17 00:22:51.325834 systemd[1]: Started sshd@11-172.31.17.137:22-4.153.228.146:57764.service - OpenSSH per-connection server daemon (4.153.228.146:57764). Jan 17 00:22:51.809774 sshd[4875]: Accepted publickey for core from 4.153.228.146 port 57764 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:51.811533 sshd[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:51.816352 systemd-logind[2077]: New session 12 of user core. Jan 17 00:22:51.819899 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:22:52.226096 sshd[4875]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:52.229230 systemd[1]: sshd@11-172.31.17.137:22-4.153.228.146:57764.service: Deactivated successfully. Jan 17 00:22:52.233965 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:22:52.234627 systemd-logind[2077]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:22:52.236538 systemd-logind[2077]: Removed session 12. Jan 17 00:22:52.309404 systemd[1]: Started sshd@12-172.31.17.137:22-4.153.228.146:57770.service - OpenSSH per-connection server daemon (4.153.228.146:57770). Jan 17 00:22:52.801400 sshd[4891]: Accepted publickey for core from 4.153.228.146 port 57770 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:52.802897 sshd[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:52.811277 systemd-logind[2077]: New session 13 of user core. Jan 17 00:22:52.816949 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 00:22:53.337421 sshd[4891]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:53.341931 systemd[1]: sshd@12-172.31.17.137:22-4.153.228.146:57770.service: Deactivated successfully. Jan 17 00:22:53.348109 systemd-logind[2077]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:22:53.350498 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:22:53.351768 systemd-logind[2077]: Removed session 13. Jan 17 00:22:53.433046 systemd[1]: Started sshd@13-172.31.17.137:22-4.153.228.146:57778.service - OpenSSH per-connection server daemon (4.153.228.146:57778). Jan 17 00:22:53.969696 sshd[4903]: Accepted publickey for core from 4.153.228.146 port 57778 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:53.971211 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:22:53.976238 systemd-logind[2077]: New session 14 of user core. Jan 17 00:22:53.984967 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:22:54.441214 sshd[4903]: pam_unix(sshd:session): session closed for user core Jan 17 00:22:54.444772 systemd-logind[2077]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:22:54.445734 systemd[1]: sshd@13-172.31.17.137:22-4.153.228.146:57778.service: Deactivated successfully. Jan 17 00:22:54.449646 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:22:54.451180 systemd-logind[2077]: Removed session 14. Jan 17 00:22:59.516158 systemd[1]: Started sshd@14-172.31.17.137:22-4.153.228.146:39560.service - OpenSSH per-connection server daemon (4.153.228.146:39560). Jan 17 00:22:59.993469 sshd[4918]: Accepted publickey for core from 4.153.228.146 port 39560 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:22:59.995167 sshd[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:00.008837 systemd-logind[2077]: New session 15 of user core. Jan 17 00:23:00.015500 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:23:00.402671 sshd[4918]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:00.409889 systemd[1]: sshd@14-172.31.17.137:22-4.153.228.146:39560.service: Deactivated successfully. Jan 17 00:23:00.413869 systemd-logind[2077]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:23:00.414942 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:23:00.415933 systemd-logind[2077]: Removed session 15. Jan 17 00:23:05.486171 systemd[1]: Started sshd@15-172.31.17.137:22-4.153.228.146:55076.service - OpenSSH per-connection server daemon (4.153.228.146:55076). Jan 17 00:23:05.973714 sshd[4934]: Accepted publickey for core from 4.153.228.146 port 55076 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:05.975206 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:05.980200 systemd-logind[2077]: New session 16 of user core. Jan 17 00:23:05.984846 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:23:06.389133 sshd[4934]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:06.392433 systemd[1]: sshd@15-172.31.17.137:22-4.153.228.146:55076.service: Deactivated successfully. Jan 17 00:23:06.397059 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:23:06.398001 systemd-logind[2077]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:23:06.399177 systemd-logind[2077]: Removed session 16. 
Jan 17 00:23:06.485850 systemd[1]: Started sshd@16-172.31.17.137:22-4.153.228.146:55086.service - OpenSSH per-connection server daemon (4.153.228.146:55086). Jan 17 00:23:07.011093 sshd[4948]: Accepted publickey for core from 4.153.228.146 port 55086 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:07.012525 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:07.017139 systemd-logind[2077]: New session 17 of user core. Jan 17 00:23:07.021859 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:23:11.599030 sshd[4948]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:11.601987 systemd[1]: sshd@16-172.31.17.137:22-4.153.228.146:55086.service: Deactivated successfully. Jan 17 00:23:11.606426 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:23:11.607714 systemd-logind[2077]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:23:11.608636 systemd-logind[2077]: Removed session 17. Jan 17 00:23:11.643840 systemd[1]: Started sshd@17-172.31.17.137:22-4.153.228.146:55094.service - OpenSSH per-connection server daemon (4.153.228.146:55094). Jan 17 00:23:12.130948 sshd[4963]: Accepted publickey for core from 4.153.228.146 port 55094 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:12.132531 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:12.138902 systemd-logind[2077]: New session 18 of user core. Jan 17 00:23:12.144314 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:23:13.368493 sshd[4963]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:13.372267 systemd[1]: sshd@17-172.31.17.137:22-4.153.228.146:55094.service: Deactivated successfully. Jan 17 00:23:13.378622 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:23:13.379656 systemd-logind[2077]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:23:13.380832 systemd-logind[2077]: Removed session 18. Jan 17 00:23:13.451284 systemd[1]: Started sshd@18-172.31.17.137:22-4.153.228.146:55106.service - OpenSSH per-connection server daemon (4.153.228.146:55106). Jan 17 00:23:13.928307 sshd[4981]: Accepted publickey for core from 4.153.228.146 port 55106 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:13.929907 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:13.935063 systemd-logind[2077]: New session 19 of user core. Jan 17 00:23:13.940924 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:23:14.494169 sshd[4981]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:14.497371 systemd[1]: sshd@18-172.31.17.137:22-4.153.228.146:55106.service: Deactivated successfully. Jan 17 00:23:14.501318 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:23:14.502385 systemd-logind[2077]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:23:14.504195 systemd-logind[2077]: Removed session 19. Jan 17 00:23:14.589912 systemd[1]: Started sshd@19-172.31.17.137:22-4.153.228.146:52466.service - OpenSSH per-connection server daemon (4.153.228.146:52466). 
Jan 17 00:23:15.113220 sshd[4992]: Accepted publickey for core from 4.153.228.146 port 52466 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:15.114904 sshd[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:15.120510 systemd-logind[2077]: New session 20 of user core. Jan 17 00:23:15.122960 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:23:15.549864 sshd[4992]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:15.552842 systemd[1]: sshd@19-172.31.17.137:22-4.153.228.146:52466.service: Deactivated successfully. Jan 17 00:23:15.556683 systemd-logind[2077]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:23:15.557286 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:23:15.559056 systemd-logind[2077]: Removed session 20. Jan 17 00:23:20.639864 systemd[1]: Started sshd@20-172.31.17.137:22-4.153.228.146:52474.service - OpenSSH per-connection server daemon (4.153.228.146:52474). Jan 17 00:23:21.154151 sshd[5008]: Accepted publickey for core from 4.153.228.146 port 52474 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:21.154811 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:21.159465 systemd-logind[2077]: New session 21 of user core. Jan 17 00:23:21.169957 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:23:21.600903 sshd[5008]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:21.604102 systemd[1]: sshd@20-172.31.17.137:22-4.153.228.146:52474.service: Deactivated successfully. Jan 17 00:23:21.607857 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:23:21.610079 systemd-logind[2077]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:23:21.611376 systemd-logind[2077]: Removed session 21. Jan 17 00:23:26.679009 systemd[1]: Started sshd@21-172.31.17.137:22-4.153.228.146:57490.service - OpenSSH per-connection server daemon (4.153.228.146:57490). Jan 17 00:23:27.165943 sshd[5021]: Accepted publickey for core from 4.153.228.146 port 57490 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:27.167645 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:27.172725 systemd-logind[2077]: New session 22 of user core. Jan 17 00:23:27.179915 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:23:27.585408 sshd[5021]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:27.590044 systemd[1]: sshd@21-172.31.17.137:22-4.153.228.146:57490.service: Deactivated successfully. Jan 17 00:23:27.595423 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:23:27.596463 systemd-logind[2077]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:23:27.597608 systemd-logind[2077]: Removed session 22. Jan 17 00:23:32.679865 systemd[1]: Started sshd@22-172.31.17.137:22-4.153.228.146:57496.service - OpenSSH per-connection server daemon (4.153.228.146:57496). Jan 17 00:23:33.195066 sshd[5036]: Accepted publickey for core from 4.153.228.146 port 57496 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:33.196613 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:33.201944 systemd-logind[2077]: New session 23 of user core. Jan 17 00:23:33.208876 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 17 00:23:33.633450 sshd[5036]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:33.636454 systemd[1]: sshd@22-172.31.17.137:22-4.153.228.146:57496.service: Deactivated successfully. Jan 17 00:23:33.639885 systemd-logind[2077]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:23:33.641797 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:23:33.643018 systemd-logind[2077]: Removed session 23. Jan 17 00:23:33.712391 systemd[1]: Started sshd@23-172.31.17.137:22-4.153.228.146:57504.service - OpenSSH per-connection server daemon (4.153.228.146:57504). Jan 17 00:23:34.207448 sshd[5049]: Accepted publickey for core from 4.153.228.146 port 57504 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:34.209047 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:34.214481 systemd-logind[2077]: New session 24 of user core. Jan 17 00:23:34.221898 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:23:36.683950 containerd[2097]: time="2026-01-17T00:23:36.683908380Z" level=info msg="StopContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" with timeout 30 (s)" Jan 17 00:23:36.686092 containerd[2097]: time="2026-01-17T00:23:36.686057761Z" level=info msg="Stop container \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" with signal terminated" Jan 17 00:23:36.707420 containerd[2097]: time="2026-01-17T00:23:36.707326145Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:23:36.737703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68-rootfs.mount: Deactivated successfully. 
Jan 17 00:23:36.744570 containerd[2097]: time="2026-01-17T00:23:36.743433472Z" level=info msg="shim disconnected" id=81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68 namespace=k8s.io Jan 17 00:23:36.744570 containerd[2097]: time="2026-01-17T00:23:36.743587441Z" level=warning msg="cleaning up after shim disconnected" id=81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68 namespace=k8s.io Jan 17 00:23:36.744570 containerd[2097]: time="2026-01-17T00:23:36.743606322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:23:36.747692 containerd[2097]: time="2026-01-17T00:23:36.747641615Z" level=info msg="StopContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" with timeout 2 (s)" Jan 17 00:23:36.749337 containerd[2097]: time="2026-01-17T00:23:36.749308370Z" level=info msg="Stop container \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" with signal terminated" Jan 17 00:23:36.766750 systemd-networkd[1652]: lxc_health: Link DOWN Jan 17 00:23:36.767271 systemd-networkd[1652]: lxc_health: Lost carrier Jan 17 00:23:36.783999 containerd[2097]: time="2026-01-17T00:23:36.783962540Z" level=info msg="StopContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" returns successfully" Jan 17 00:23:36.799119 containerd[2097]: time="2026-01-17T00:23:36.799076768Z" level=info msg="StopPodSandbox for \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\"" Jan 17 00:23:36.799306 containerd[2097]: time="2026-01-17T00:23:36.799134096Z" level=info msg="Container to stop \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.805280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951-shm.mount: Deactivated successfully. Jan 17 00:23:36.840141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce-rootfs.mount: Deactivated successfully. Jan 17 00:23:36.852828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951-rootfs.mount: Deactivated successfully. 
Jan 17 00:23:36.854882 containerd[2097]: time="2026-01-17T00:23:36.854655483Z" level=info msg="shim disconnected" id=1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce namespace=k8s.io Jan 17 00:23:36.854882 containerd[2097]: time="2026-01-17T00:23:36.854716370Z" level=warning msg="cleaning up after shim disconnected" id=1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce namespace=k8s.io Jan 17 00:23:36.854882 containerd[2097]: time="2026-01-17T00:23:36.854729804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:23:36.856791 containerd[2097]: time="2026-01-17T00:23:36.855506558Z" level=info msg="shim disconnected" id=3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951 namespace=k8s.io Jan 17 00:23:36.856791 containerd[2097]: time="2026-01-17T00:23:36.855630101Z" level=warning msg="cleaning up after shim disconnected" id=3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951 namespace=k8s.io Jan 17 00:23:36.856791 containerd[2097]: time="2026-01-17T00:23:36.855643697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:23:36.882786 containerd[2097]: time="2026-01-17T00:23:36.882745922Z" level=info msg="TearDown network for sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" successfully" Jan 17 00:23:36.882786 containerd[2097]: time="2026-01-17T00:23:36.882780094Z" level=info msg="StopPodSandbox for \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" returns successfully" Jan 17 00:23:36.884924 containerd[2097]: time="2026-01-17T00:23:36.884886926Z" level=info msg="StopContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" returns successfully" Jan 17 00:23:36.885496 containerd[2097]: time="2026-01-17T00:23:36.885448551Z" level=info msg="StopPodSandbox for \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\"" Jan 17 00:23:36.885727 containerd[2097]: time="2026-01-17T00:23:36.885703870Z" level=info msg="Container to stop \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.885816 containerd[2097]: time="2026-01-17T00:23:36.885730465Z" level=info msg="Container to stop \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.885816 containerd[2097]: time="2026-01-17T00:23:36.885777053Z" level=info msg="Container to stop \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.885816 containerd[2097]: time="2026-01-17T00:23:36.885792366Z" level=info msg="Container to stop \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.885816 containerd[2097]: time="2026-01-17T00:23:36.885805779Z" level=info msg="Container to stop \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:23:36.917344 kubelet[3480]: I0117 00:23:36.916998 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdk8x\" (UniqueName: \"kubernetes.io/projected/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-kube-api-access-gdk8x\") pod \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\" (UID: \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\") " Jan 17 
00:23:36.917344 kubelet[3480]: I0117 00:23:36.917053 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-cilium-config-path\") pod \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\" (UID: \"ac47a62d-0ad5-4599-bdfb-4a502444a7a2\") " Jan 17 00:23:36.944580 containerd[2097]: time="2026-01-17T00:23:36.940930846Z" level=info msg="shim disconnected" id=7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780 namespace=k8s.io Jan 17 00:23:36.944580 containerd[2097]: time="2026-01-17T00:23:36.940985789Z" level=warning msg="cleaning up after shim disconnected" id=7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780 namespace=k8s.io Jan 17 00:23:36.944580 containerd[2097]: time="2026-01-17T00:23:36.940998007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:23:36.951409 kubelet[3480]: I0117 00:23:36.949608 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ac47a62d-0ad5-4599-bdfb-4a502444a7a2" (UID: "ac47a62d-0ad5-4599-bdfb-4a502444a7a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:23:36.951641 kubelet[3480]: I0117 00:23:36.951449 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-kube-api-access-gdk8x" (OuterVolumeSpecName: "kube-api-access-gdk8x") pod "ac47a62d-0ad5-4599-bdfb-4a502444a7a2" (UID: "ac47a62d-0ad5-4599-bdfb-4a502444a7a2"). InnerVolumeSpecName "kube-api-access-gdk8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:23:36.962418 containerd[2097]: time="2026-01-17T00:23:36.962376131Z" level=info msg="TearDown network for sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" successfully" Jan 17 00:23:36.962776 containerd[2097]: time="2026-01-17T00:23:36.962751014Z" level=info msg="StopPodSandbox for \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" returns successfully" Jan 17 00:23:37.019505 kubelet[3480]: I0117 00:23:37.019454 3480 scope.go:117] "RemoveContainer" containerID="1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce" Jan 17 00:23:37.022244 kubelet[3480]: I0117 00:23:37.022213 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-kernel\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022244 kubelet[3480]: I0117 00:23:37.022247 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-hostproc\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022262 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-run\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022277 3480 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-xtables-lock\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022291 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-etc-cni-netd\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022306 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-cgroup\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022330 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-hubble-tls\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022376 kubelet[3480]: I0117 00:23:37.022345 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-net\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022362 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f244c0-5566-4f41-a65d-970d7e108157-clustermesh-secrets\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022381 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f244c0-5566-4f41-a65d-970d7e108157-cilium-config-path\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022394 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-lib-modules\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022408 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-bpf-maps\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022424 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cni-path\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.022627 kubelet[3480]: I0117 00:23:37.022439 3480 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xx9n4\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4\") pod \"69f244c0-5566-4f41-a65d-970d7e108157\" (UID: \"69f244c0-5566-4f41-a65d-970d7e108157\") " Jan 17 00:23:37.023375 kubelet[3480]: I0117 00:23:37.022471 3480 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdk8x\" (UniqueName: \"kubernetes.io/projected/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-kube-api-access-gdk8x\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.023375 kubelet[3480]: I0117 00:23:37.022482 3480 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ac47a62d-0ad5-4599-bdfb-4a502444a7a2-cilium-config-path\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.025394 kubelet[3480]: I0117 00:23:37.025097 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4" (OuterVolumeSpecName: "kube-api-access-xx9n4") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "kube-api-access-xx9n4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:23:37.025394 kubelet[3480]: I0117 00:23:37.025161 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.025394 kubelet[3480]: I0117 00:23:37.025187 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-hostproc" (OuterVolumeSpecName: "hostproc") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.025394 kubelet[3480]: I0117 00:23:37.025202 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.025394 kubelet[3480]: I0117 00:23:37.025216 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.025599 kubelet[3480]: I0117 00:23:37.025229 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.025599 kubelet[3480]: I0117 00:23:37.025243 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.027869 kubelet[3480]: I0117 00:23:37.027828 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:23:37.027933 kubelet[3480]: I0117 00:23:37.027881 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.030738 kubelet[3480]: I0117 00:23:37.029858 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.031942 kubelet[3480]: I0117 00:23:37.030023 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f244c0-5566-4f41-a65d-970d7e108157-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:23:37.031942 kubelet[3480]: I0117 00:23:37.030040 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.031942 kubelet[3480]: I0117 00:23:37.030050 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cni-path" (OuterVolumeSpecName: "cni-path") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:23:37.069445 kubelet[3480]: I0117 00:23:37.049835 3480 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f244c0-5566-4f41-a65d-970d7e108157-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "69f244c0-5566-4f41-a65d-970d7e108157" (UID: "69f244c0-5566-4f41-a65d-970d7e108157"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:23:37.069445 kubelet[3480]: I0117 00:23:37.064939 3480 scope.go:117] "RemoveContainer" containerID="4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5" Jan 17 00:23:37.069445 kubelet[3480]: I0117 00:23:37.068907 3480 scope.go:117] "RemoveContainer" containerID="4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc" Jan 17 00:23:37.069596 containerd[2097]: time="2026-01-17T00:23:37.033207698Z" level=info msg="RemoveContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\"" Jan 17 00:23:37.069596 containerd[2097]: time="2026-01-17T00:23:37.038773383Z" level=info msg="RemoveContainer for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" returns successfully" Jan 17 00:23:37.069596 containerd[2097]: time="2026-01-17T00:23:37.065953795Z" level=info msg="RemoveContainer for \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\"" Jan 17 00:23:37.069596 containerd[2097]: time="2026-01-17T00:23:37.068730058Z" level=info msg="RemoveContainer for \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\" returns successfully" Jan 17 00:23:37.069836 containerd[2097]: time="2026-01-17T00:23:37.069809906Z" level=info msg="RemoveContainer for \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\"" Jan 17 00:23:37.077316 containerd[2097]: time="2026-01-17T00:23:37.077272652Z" level=info msg="RemoveContainer for \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\" returns successfully" Jan 17 00:23:37.077637 kubelet[3480]: I0117 00:23:37.077487 3480 scope.go:117] "RemoveContainer" containerID="193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe" Jan 17 00:23:37.078682 containerd[2097]: time="2026-01-17T00:23:37.078655792Z" level=info msg="RemoveContainer for \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\"" Jan 17 00:23:37.081332 containerd[2097]: time="2026-01-17T00:23:37.081304468Z" level=info msg="RemoveContainer for \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\" returns successfully" Jan 17 00:23:37.081899 kubelet[3480]: I0117 00:23:37.081795 3480 scope.go:117] "RemoveContainer" containerID="771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac" Jan 17 00:23:37.083141 containerd[2097]: time="2026-01-17T00:23:37.083113704Z" level=info msg="RemoveContainer for \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\"" Jan 17 00:23:37.085863 containerd[2097]: time="2026-01-17T00:23:37.085827445Z" level=info msg="RemoveContainer for \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\" returns successfully" Jan 17 00:23:37.086067 kubelet[3480]: I0117 00:23:37.086031 3480 scope.go:117] "RemoveContainer" containerID="1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce" Jan 17 00:23:37.094764 containerd[2097]: time="2026-01-17T00:23:37.086236852Z" level=error msg="ContainerStatus for \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\": not found" Jan 17 00:23:37.099168 kubelet[3480]: E0117 00:23:37.099063 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\": not found" 
containerID="1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce" Jan 17 00:23:37.120037 kubelet[3480]: I0117 00:23:37.105528 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce"} err="failed to get container status \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d72b0c458e875847f9189cbc8637710b27f7e0b565e90a236b442c914b777ce\": not found" Jan 17 00:23:37.120037 kubelet[3480]: I0117 00:23:37.120038 3480 scope.go:117] "RemoveContainer" containerID="4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5" Jan 17 00:23:37.120592 containerd[2097]: time="2026-01-17T00:23:37.120414458Z" level=error msg="ContainerStatus for \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\": not found" Jan 17 00:23:37.120693 kubelet[3480]: E0117 00:23:37.120596 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\": not found" containerID="4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5" Jan 17 00:23:37.120693 kubelet[3480]: I0117 00:23:37.120623 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5"} err="failed to get container status \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"4424e52606261590de9a029045f101c0eef7ede206c0968bf3f2395462aef5d5\": not found" Jan 17 00:23:37.120693 kubelet[3480]: I0117 00:23:37.120644 3480 scope.go:117] "RemoveContainer" containerID="4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc" Jan 17 00:23:37.120886 containerd[2097]: time="2026-01-17T00:23:37.120839234Z" level=error msg="ContainerStatus for \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\": not found" Jan 17 00:23:37.121027 kubelet[3480]: E0117 00:23:37.120962 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\": not found" containerID="4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc" Jan 17 00:23:37.121027 kubelet[3480]: I0117 00:23:37.120995 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc"} err="failed to get container status \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c082be1859cdff322a0c64567e5f3bcc380ac78e7f92099a1b38ad832fb80fc\": not found" Jan 17 00:23:37.121027 kubelet[3480]: I0117 00:23:37.121008 3480 scope.go:117] "RemoveContainer" 
containerID="193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe" Jan 17 00:23:37.121188 containerd[2097]: time="2026-01-17T00:23:37.121154424Z" level=error msg="ContainerStatus for \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\": not found" Jan 17 00:23:37.121277 kubelet[3480]: E0117 00:23:37.121253 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\": not found" containerID="193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe" Jan 17 00:23:37.121310 kubelet[3480]: I0117 00:23:37.121276 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe"} err="failed to get container status \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\": rpc error: code = NotFound desc = an error occurred when try to find container \"193bd0072aa4c27ba5481777c27d46bb66ccd98dd991b916fa9f16822b1a1cfe\": not found" Jan 17 00:23:37.121310 kubelet[3480]: I0117 00:23:37.121288 3480 scope.go:117] "RemoveContainer" containerID="771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac" Jan 17 00:23:37.121458 containerd[2097]: time="2026-01-17T00:23:37.121420814Z" level=error msg="ContainerStatus for \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\": not found" Jan 17 00:23:37.121535 kubelet[3480]: E0117 00:23:37.121514 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\": not found" containerID="771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac" Jan 17 00:23:37.121580 kubelet[3480]: I0117 00:23:37.121552 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac"} err="failed to get container status \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"771b0db0a00807fcba78f0b73a72f01b90557463c2150b6642365fb9a27e53ac\": not found" Jan 17 00:23:37.121580 kubelet[3480]: I0117 00:23:37.121575 3480 scope.go:117] "RemoveContainer" containerID="81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68" Jan 17 00:23:37.123101 containerd[2097]: time="2026-01-17T00:23:37.122653056Z" level=info msg="RemoveContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.122973 3480 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-etc-cni-netd\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.122990 3480 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-cgroup\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.122998 3480 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-hubble-tls\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.123006 3480 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-net\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.123015 3480 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/69f244c0-5566-4f41-a65d-970d7e108157-clustermesh-secrets\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.123023 3480 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/69f244c0-5566-4f41-a65d-970d7e108157-cilium-config-path\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.123031 3480 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-lib-modules\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123181 kubelet[3480]: I0117 00:23:37.123038 3480 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-bpf-maps\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123045 3480 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cni-path\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123053 3480 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xx9n4\" (UniqueName: \"kubernetes.io/projected/69f244c0-5566-4f41-a65d-970d7e108157-kube-api-access-xx9n4\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123061 3480 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-host-proc-sys-kernel\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123069 3480 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-hostproc\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123076 3480 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-cilium-run\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.123409 kubelet[3480]: I0117 00:23:37.123084 3480 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/69f244c0-5566-4f41-a65d-970d7e108157-xtables-lock\") on node \"ip-172-31-17-137\" DevicePath \"\"" Jan 17 00:23:37.125451 containerd[2097]: time="2026-01-17T00:23:37.125423142Z" level=info 
msg="RemoveContainer for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" returns successfully" Jan 17 00:23:37.125643 kubelet[3480]: I0117 00:23:37.125593 3480 scope.go:117] "RemoveContainer" containerID="81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68" Jan 17 00:23:37.125890 kubelet[3480]: E0117 00:23:37.125879 3480 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\": not found" containerID="81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68" Jan 17 00:23:37.125922 containerd[2097]: time="2026-01-17T00:23:37.125771268Z" level=error msg="ContainerStatus for \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\": not found" Jan 17 00:23:37.125954 kubelet[3480]: I0117 00:23:37.125897 3480 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68"} err="failed to get container status \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\": rpc error: code = NotFound desc = an error occurred when try to find container \"81a7eda21ca2d839bfa4829f2d379a577a1d88cf38905e1900fd5ac33113ef68\": not found" Jan 17 00:23:37.649096 systemd[1]: var-lib-kubelet-pods-ac47a62d\x2d0ad5\x2d4599\x2dbdfb\x2d4a502444a7a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdk8x.mount: Deactivated successfully. Jan 17 00:23:37.649619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780-rootfs.mount: Deactivated successfully. Jan 17 00:23:37.649786 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780-shm.mount: Deactivated successfully. Jan 17 00:23:37.649944 systemd[1]: var-lib-kubelet-pods-69f244c0\x2d5566\x2d4f41\x2da65d\x2d970d7e108157-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxx9n4.mount: Deactivated successfully. Jan 17 00:23:37.650093 systemd[1]: var-lib-kubelet-pods-69f244c0\x2d5566\x2d4f41\x2da65d\x2d970d7e108157-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:23:37.650791 systemd[1]: var-lib-kubelet-pods-69f244c0\x2d5566\x2d4f41\x2da65d\x2d970d7e108157-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:23:37.714770 kubelet[3480]: I0117 00:23:37.714730 3480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f244c0-5566-4f41-a65d-970d7e108157" path="/var/lib/kubelet/pods/69f244c0-5566-4f41-a65d-970d7e108157/volumes" Jan 17 00:23:37.715383 kubelet[3480]: I0117 00:23:37.715363 3480 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac47a62d-0ad5-4599-bdfb-4a502444a7a2" path="/var/lib/kubelet/pods/ac47a62d-0ad5-4599-bdfb-4a502444a7a2/volumes" Jan 17 00:23:38.621161 sshd[5049]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:38.625284 systemd[1]: sshd@23-172.31.17.137:22-4.153.228.146:57504.service: Deactivated successfully. Jan 17 00:23:38.628686 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:23:38.629537 systemd-logind[2077]: Session 24 logged out. 
Waiting for processes to exit. Jan 17 00:23:38.631111 systemd-logind[2077]: Removed session 24. Jan 17 00:23:38.716854 systemd[1]: Started sshd@24-172.31.17.137:22-4.153.228.146:60950.service - OpenSSH per-connection server daemon (4.153.228.146:60950). Jan 17 00:23:39.230335 sshd[5215]: Accepted publickey for core from 4.153.228.146 port 60950 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s Jan 17 00:23:39.231847 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:23:39.236679 systemd-logind[2077]: New session 25 of user core. Jan 17 00:23:39.240842 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:23:39.338098 ntpd[2052]: Deleting interface #10 lxc_health, fe80::cc27:32ff:fe92:5acc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs Jan 17 00:23:39.829922 kubelet[3480]: E0117 00:23:39.829818 3480 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:23:40.287313 kubelet[3480]: I0117 00:23:40.286980 3480 memory_manager.go:355] "RemoveStaleState removing state" podUID="69f244c0-5566-4f41-a65d-970d7e108157" containerName="cilium-agent" Jan 17 00:23:40.287835 kubelet[3480]: I0117 00:23:40.287728 3480 memory_manager.go:355] "RemoveStaleState removing state" podUID="ac47a62d-0ad5-4599-bdfb-4a502444a7a2" containerName="cilium-operator" Jan 17 00:23:40.331807 sshd[5215]: pam_unix(sshd:session): session closed for user core Jan 17 00:23:40.340377 systemd[1]: sshd@24-172.31.17.137:22-4.153.228.146:60950.service: Deactivated successfully.
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341617 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4dbd1307-ddda-4517-b76f-4521899a8157-cilium-config-path\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341658 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-host-proc-sys-net\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341684 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-cni-path\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341710 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-cilium-cgroup\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341735 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4dbd1307-ddda-4517-b76f-4521899a8157-hubble-tls\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343381 kubelet[3480]: I0117 00:23:40.341758 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpsn\" (UniqueName: \"kubernetes.io/projected/4dbd1307-ddda-4517-b76f-4521899a8157-kube-api-access-qwpsn\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341782 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4dbd1307-ddda-4517-b76f-4521899a8157-clustermesh-secrets\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341807 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-xtables-lock\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341832 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-etc-cni-netd\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341854 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4dbd1307-ddda-4517-b76f-4521899a8157-cilium-ipsec-secrets\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341878 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-host-proc-sys-kernel\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.343825 kubelet[3480]: I0117 00:23:40.341903 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-bpf-maps\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.344104 kubelet[3480]: I0117 00:23:40.341924 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-lib-modules\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.344104 kubelet[3480]: I0117 00:23:40.341950 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-cilium-run\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.344104 kubelet[3480]: I0117 00:23:40.341974 3480 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4dbd1307-ddda-4517-b76f-4521899a8157-hostproc\") pod \"cilium-gxbh6\" (UID: \"4dbd1307-ddda-4517-b76f-4521899a8157\") " pod="kube-system/cilium-gxbh6"
Jan 17 00:23:40.353833 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:23:40.367648 systemd-logind[2077]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:23:40.372869 systemd-logind[2077]: Removed session 25.
Jan 17 00:23:40.423322 systemd[1]: Started sshd@25-172.31.17.137:22-4.153.228.146:60956.service - OpenSSH per-connection server daemon (4.153.228.146:60956).
Jan 17 00:23:40.606071 containerd[2097]: time="2026-01-17T00:23:40.605939721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxbh6,Uid:4dbd1307-ddda-4517-b76f-4521899a8157,Namespace:kube-system,Attempt:0,}"
Jan 17 00:23:40.627482 containerd[2097]: time="2026-01-17T00:23:40.627410431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:23:40.627712 containerd[2097]: time="2026-01-17T00:23:40.627619557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:23:40.627712 containerd[2097]: time="2026-01-17T00:23:40.627679601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:23:40.628439 containerd[2097]: time="2026-01-17T00:23:40.628393131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:23:40.673525 containerd[2097]: time="2026-01-17T00:23:40.673473427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxbh6,Uid:4dbd1307-ddda-4517-b76f-4521899a8157,Namespace:kube-system,Attempt:0,} returns sandbox id \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\""
Jan 17 00:23:40.676483 containerd[2097]: time="2026-01-17T00:23:40.676402949Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 00:23:40.690974 containerd[2097]: time="2026-01-17T00:23:40.690914823Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e7da0bb679ffcfd1a4b82b2ed10883ad8765ea676d44bacb7c86e0739bc4820e\""
Jan 17 00:23:40.691658 containerd[2097]: time="2026-01-17T00:23:40.691602533Z" level=info msg="StartContainer for \"e7da0bb679ffcfd1a4b82b2ed10883ad8765ea676d44bacb7c86e0739bc4820e\""
Jan 17 00:23:40.743869 containerd[2097]: time="2026-01-17T00:23:40.743816230Z" level=info msg="StartContainer for \"e7da0bb679ffcfd1a4b82b2ed10883ad8765ea676d44bacb7c86e0739bc4820e\" returns successfully"
Jan 17 00:23:40.819289 containerd[2097]: time="2026-01-17T00:23:40.819229982Z" level=info msg="shim disconnected" id=e7da0bb679ffcfd1a4b82b2ed10883ad8765ea676d44bacb7c86e0739bc4820e namespace=k8s.io
Jan 17 00:23:40.819289 containerd[2097]: time="2026-01-17T00:23:40.819279982Z" level=warning msg="cleaning up after shim disconnected" id=e7da0bb679ffcfd1a4b82b2ed10883ad8765ea676d44bacb7c86e0739bc4820e namespace=k8s.io
Jan 17 00:23:40.819289 containerd[2097]: time="2026-01-17T00:23:40.819288564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:41.007838 sshd[5228]: Accepted publickey for core from 4.153.228.146 port 60956 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s
Jan 17 00:23:41.010119 containerd[2097]: time="2026-01-17T00:23:41.010065433Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:23:41.012608 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:23:41.020068 systemd-logind[2077]: New session 26 of user core.
Jan 17 00:23:41.024664 containerd[2097]: time="2026-01-17T00:23:41.024636457Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"79111c32b6ca0f2da33f5cd787d63eaf5ceb6fc90544242a7eb690d223f5f8af\""
Jan 17 00:23:41.025878 containerd[2097]: time="2026-01-17T00:23:41.025111207Z" level=info msg="StartContainer for \"79111c32b6ca0f2da33f5cd787d63eaf5ceb6fc90544242a7eb690d223f5f8af\""
Jan 17 00:23:41.030271 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:23:41.086162 containerd[2097]: time="2026-01-17T00:23:41.086130550Z" level=info msg="StartContainer for \"79111c32b6ca0f2da33f5cd787d63eaf5ceb6fc90544242a7eb690d223f5f8af\" returns successfully"
Jan 17 00:23:41.114240 containerd[2097]: time="2026-01-17T00:23:41.114165734Z" level=info msg="shim disconnected" id=79111c32b6ca0f2da33f5cd787d63eaf5ceb6fc90544242a7eb690d223f5f8af namespace=k8s.io
Jan 17 00:23:41.114240 containerd[2097]: time="2026-01-17T00:23:41.114224611Z" level=warning msg="cleaning up after shim disconnected" id=79111c32b6ca0f2da33f5cd787d63eaf5ceb6fc90544242a7eb690d223f5f8af namespace=k8s.io
Jan 17 00:23:41.114240 containerd[2097]: time="2026-01-17T00:23:41.114237925Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:41.378533 sshd[5228]: pam_unix(sshd:session): session closed for user core
Jan 17 00:23:41.383081 systemd[1]: sshd@25-172.31.17.137:22-4.153.228.146:60956.service: Deactivated successfully.
Jan 17 00:23:41.383742 systemd-logind[2077]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:23:41.386328 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:23:41.387704 systemd-logind[2077]: Removed session 26.
Jan 17 00:23:41.466918 systemd[1]: Started sshd@26-172.31.17.137:22-4.153.228.146:60958.service - OpenSSH per-connection server daemon (4.153.228.146:60958).
Jan 17 00:23:41.983834 sshd[5405]: Accepted publickey for core from 4.153.228.146 port 60958 ssh2: RSA SHA256:sbILTD9G5iELn3Zwr53HzB3sU6rscuYx+TXC00D8O3s
Jan 17 00:23:41.985233 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:23:41.989841 systemd-logind[2077]: New session 27 of user core.
Jan 17 00:23:41.992847 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 00:23:42.013067 containerd[2097]: time="2026-01-17T00:23:42.013016615Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:23:42.035851 containerd[2097]: time="2026-01-17T00:23:42.035809910Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a\""
Jan 17 00:23:42.036790 containerd[2097]: time="2026-01-17T00:23:42.036764822Z" level=info msg="StartContainer for \"55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a\""
Jan 17 00:23:42.079053 systemd[1]: run-containerd-runc-k8s.io-55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a-runc.Jnqo0k.mount: Deactivated successfully.
Jan 17 00:23:42.110903 containerd[2097]: time="2026-01-17T00:23:42.110868478Z" level=info msg="StartContainer for \"55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a\" returns successfully"
Jan 17 00:23:42.230137 containerd[2097]: time="2026-01-17T00:23:42.230058081Z" level=info msg="shim disconnected" id=55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a namespace=k8s.io
Jan 17 00:23:42.230137 containerd[2097]: time="2026-01-17T00:23:42.230133482Z" level=warning msg="cleaning up after shim disconnected" id=55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a namespace=k8s.io
Jan 17 00:23:42.230137 containerd[2097]: time="2026-01-17T00:23:42.230145963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:42.237736 kubelet[3480]: I0117 00:23:42.237366 3480 setters.go:602] "Node became not ready" node="ip-172-31-17-137" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:23:42Z","lastTransitionTime":"2026-01-17T00:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:23:42.478708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55102b2df833c4618e8963b3de2e2d344a9cdf8dcb10c98aa178db865b00da8a-rootfs.mount: Deactivated successfully.
Jan 17 00:23:43.017949 containerd[2097]: time="2026-01-17T00:23:43.017887973Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:23:43.036608 containerd[2097]: time="2026-01-17T00:23:43.032356735Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851\""
Jan 17 00:23:43.036608 containerd[2097]: time="2026-01-17T00:23:43.033221663Z" level=info msg="StartContainer for \"7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851\""
Jan 17 00:23:43.046440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533838299.mount: Deactivated successfully.
Jan 17 00:23:43.108025 containerd[2097]: time="2026-01-17T00:23:43.107985430Z" level=info msg="StartContainer for \"7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851\" returns successfully"
Jan 17 00:23:43.131205 containerd[2097]: time="2026-01-17T00:23:43.131153788Z" level=info msg="shim disconnected" id=7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851 namespace=k8s.io
Jan 17 00:23:43.131205 containerd[2097]: time="2026-01-17T00:23:43.131201708Z" level=warning msg="cleaning up after shim disconnected" id=7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851 namespace=k8s.io
Jan 17 00:23:43.131205 containerd[2097]: time="2026-01-17T00:23:43.131210130Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:23:43.478856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d342792509d264db73ed8d81071eb38fe274438dd0e566b80d65ea8cfbe8851-rootfs.mount: Deactivated successfully.
Jan 17 00:23:44.021240 containerd[2097]: time="2026-01-17T00:23:44.021192622Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:23:44.035223 containerd[2097]: time="2026-01-17T00:23:44.035175362Z" level=info msg="CreateContainer within sandbox \"2af77f3e32ac5edd8f3198e61be202c75ac9e588945d44461aff228ed465540c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"03863b3bfc706d2274c285f12fd53443cb3bae951432c6132febd9743c72aa26\""
Jan 17 00:23:44.039383 containerd[2097]: time="2026-01-17T00:23:44.036324955Z" level=info msg="StartContainer for \"03863b3bfc706d2274c285f12fd53443cb3bae951432c6132febd9743c72aa26\""
Jan 17 00:23:44.038024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857359368.mount: Deactivated successfully.
Jan 17 00:23:44.096278 containerd[2097]: time="2026-01-17T00:23:44.095638077Z" level=info msg="StartContainer for \"03863b3bfc706d2274c285f12fd53443cb3bae951432c6132febd9743c72aa26\" returns successfully"
Jan 17 00:23:44.611589 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:23:46.959621 systemd[1]: run-containerd-runc-k8s.io-03863b3bfc706d2274c285f12fd53443cb3bae951432c6132febd9743c72aa26-runc.DNncKX.mount: Deactivated successfully.
Jan 17 00:23:47.698988 systemd-networkd[1652]: lxc_health: Link UP
Jan 17 00:23:47.707441 systemd-networkd[1652]: lxc_health: Gained carrier
Jan 17 00:23:47.717468 (udev-worker)[6091]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:23:48.638590 kubelet[3480]: I0117 00:23:48.636089 3480 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gxbh6" podStartSLOduration=8.636064461 podStartE2EDuration="8.636064461s" podCreationTimestamp="2026-01-17 00:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:23:45.04069368 +0000 UTC m=+105.448971276" watchObservedRunningTime="2026-01-17 00:23:48.636064461 +0000 UTC m=+109.044342058"
Jan 17 00:23:49.578046 systemd-networkd[1652]: lxc_health: Gained IPv6LL
Jan 17 00:23:52.516169 kubelet[3480]: E0117 00:23:52.516026 3480 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45394->127.0.0.1:45833: write tcp 127.0.0.1:45394->127.0.0.1:45833: write: broken pipe
Jan 17 00:23:54.338152 ntpd[2052]: Listen normally on 13 lxc_health [fe80::6cfc:64ff:fe88:740c%14]:123
Jan 17 00:23:54.615857 kubelet[3480]: E0117 00:23:54.615037 3480 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45408->127.0.0.1:45833: write tcp 127.0.0.1:45408->127.0.0.1:45833: write: broken pipe
Jan 17 00:23:56.907523 sshd[5405]: pam_unix(sshd:session): session closed for user core
Jan 17 00:23:56.910798 systemd[1]: sshd@26-172.31.17.137:22-4.153.228.146:60958.service: Deactivated successfully.
Jan 17 00:23:56.914616 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 00:23:56.916249 systemd-logind[2077]: Session 27 logged out. Waiting for processes to exit.
Jan 17 00:23:56.917276 systemd-logind[2077]: Removed session 27.
Jan 17 00:23:59.735348 containerd[2097]: time="2026-01-17T00:23:59.735308257Z" level=info msg="StopPodSandbox for \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\""
Jan 17 00:23:59.735788 containerd[2097]: time="2026-01-17T00:23:59.735395998Z" level=info msg="TearDown network for sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" successfully"
Jan 17 00:23:59.735788 containerd[2097]: time="2026-01-17T00:23:59.735406971Z" level=info msg="StopPodSandbox for \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" returns successfully"
Jan 17 00:23:59.735788 containerd[2097]: time="2026-01-17T00:23:59.735757605Z" level=info msg="RemovePodSandbox for \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\""
Jan 17 00:23:59.737959 containerd[2097]: time="2026-01-17T00:23:59.737928740Z" level=info msg="Forcibly stopping sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\""
Jan 17 00:23:59.738083 containerd[2097]: time="2026-01-17T00:23:59.738000581Z" level=info msg="TearDown network for sandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" successfully"
Jan 17 00:23:59.740928 containerd[2097]: time="2026-01-17T00:23:59.740886777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:23:59.741034 containerd[2097]: time="2026-01-17T00:23:59.740940197Z" level=info msg="RemovePodSandbox \"7850dd0d0294d3456cb8f0b86727cd0bef3a65d56a409bb628808f5b777b7780\" returns successfully"
Jan 17 00:23:59.741670 containerd[2097]: time="2026-01-17T00:23:59.741378356Z" level=info msg="StopPodSandbox for \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\""
Jan 17 00:23:59.741670 containerd[2097]: time="2026-01-17T00:23:59.741449133Z" level=info msg="TearDown network for sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" successfully"
Jan 17 00:23:59.741670 containerd[2097]: time="2026-01-17T00:23:59.741459729Z" level=info msg="StopPodSandbox for \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" returns successfully"
Jan 17 00:23:59.741822 containerd[2097]: time="2026-01-17T00:23:59.741776059Z" level=info msg="RemovePodSandbox for \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\""
Jan 17 00:23:59.741822 containerd[2097]: time="2026-01-17T00:23:59.741798397Z" level=info msg="Forcibly stopping sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\""
Jan 17 00:23:59.742608 containerd[2097]: time="2026-01-17T00:23:59.741852703Z" level=info msg="TearDown network for sandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" successfully"
Jan 17 00:23:59.744785 containerd[2097]: time="2026-01-17T00:23:59.744745214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:23:59.744903 containerd[2097]: time="2026-01-17T00:23:59.744809512Z" level=info msg="RemovePodSandbox \"3967b3b8d097c07ea0e65b258d893cfbda3f55b8477cc862068a3581ec24c951\" returns successfully"
Jan 17 00:24:11.978760 kubelet[3480]: E0117 00:24:11.978714 3480 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": context deadline exceeded"
Jan 17 00:24:12.172819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466-rootfs.mount: Deactivated successfully.
Jan 17 00:24:12.184356 containerd[2097]: time="2026-01-17T00:24:12.184294098Z" level=info msg="shim disconnected" id=2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466 namespace=k8s.io
Jan 17 00:24:12.184356 containerd[2097]: time="2026-01-17T00:24:12.184350033Z" level=warning msg="cleaning up after shim disconnected" id=2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466 namespace=k8s.io
Jan 17 00:24:12.184356 containerd[2097]: time="2026-01-17T00:24:12.184358793Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:24:13.088182 kubelet[3480]: I0117 00:24:13.088137 3480 scope.go:117] "RemoveContainer" containerID="2009fb39d125e262d7b5b26f37c8048e981f146e66fac7f48dba880af1c6c466"
Jan 17 00:24:13.093011 containerd[2097]: time="2026-01-17T00:24:13.092971701Z" level=info msg="CreateContainer within sandbox \"89de8d603945b67e748db6a5fc0951b8408b3377e0e5ee981660fc75a4863437\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:24:13.108589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670509811.mount: Deactivated successfully.
Jan 17 00:24:13.109104 containerd[2097]: time="2026-01-17T00:24:13.109070626Z" level=info msg="CreateContainer within sandbox \"89de8d603945b67e748db6a5fc0951b8408b3377e0e5ee981660fc75a4863437\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1ab7ba3396d4c0274410682309c541e3824087b0a4c07991e3d07efb7057a4ad\""
Jan 17 00:24:13.110135 containerd[2097]: time="2026-01-17T00:24:13.110106249Z" level=info msg="StartContainer for \"1ab7ba3396d4c0274410682309c541e3824087b0a4c07991e3d07efb7057a4ad\""
Jan 17 00:24:13.172627 systemd[1]: run-containerd-runc-k8s.io-1ab7ba3396d4c0274410682309c541e3824087b0a4c07991e3d07efb7057a4ad-runc.60TPne.mount: Deactivated successfully.
Jan 17 00:24:13.194775 containerd[2097]: time="2026-01-17T00:24:13.194734299Z" level=info msg="StartContainer for \"1ab7ba3396d4c0274410682309c541e3824087b0a4c07991e3d07efb7057a4ad\" returns successfully"
Jan 17 00:24:16.339060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33-rootfs.mount: Deactivated successfully.
Jan 17 00:24:16.348199 containerd[2097]: time="2026-01-17T00:24:16.348143343Z" level=info msg="shim disconnected" id=10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33 namespace=k8s.io
Jan 17 00:24:16.348199 containerd[2097]: time="2026-01-17T00:24:16.348190525Z" level=warning msg="cleaning up after shim disconnected" id=10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33 namespace=k8s.io
Jan 17 00:24:16.348199 containerd[2097]: time="2026-01-17T00:24:16.348199532Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:24:17.099331 kubelet[3480]: I0117 00:24:17.099294 3480 scope.go:117] "RemoveContainer" containerID="10173f6eec0b0f88f42962a2aed2e1e0312bae10bffc6746b0093a7b7bfe9d33"
Jan 17 00:24:17.101389 containerd[2097]: time="2026-01-17T00:24:17.101351052Z" level=info msg="CreateContainer within sandbox \"13390b1dd422724cacf5beb7f2646eafd7b368a90923d83482b23da774003e76\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:24:17.117624 containerd[2097]: time="2026-01-17T00:24:17.117583919Z" level=info msg="CreateContainer within sandbox \"13390b1dd422724cacf5beb7f2646eafd7b368a90923d83482b23da774003e76\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"10359feaaa6806009ea615e645736b523ec57ed61dbeb5e9b4916a5d6aeffcab\""
Jan 17 00:24:17.118178 containerd[2097]: time="2026-01-17T00:24:17.118144976Z" level=info msg="StartContainer for \"10359feaaa6806009ea615e645736b523ec57ed61dbeb5e9b4916a5d6aeffcab\""
Jan 17 00:24:17.198183 containerd[2097]: time="2026-01-17T00:24:17.198144680Z" level=info msg="StartContainer for \"10359feaaa6806009ea615e645736b523ec57ed61dbeb5e9b4916a5d6aeffcab\" returns successfully"
Jan 17 00:24:21.979860 kubelet[3480]: E0117 00:24:21.979810 3480 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 17 00:24:31.980701 kubelet[3480]: E0117 00:24:31.980643 3480 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-137?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"