Jan 23 00:05:09.148559 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 00:05:09.148602 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026
Jan 23 00:05:09.148626 kernel: KASLR disabled due to lack of seed
Jan 23 00:05:09.148642 kernel: efi: EFI v2.7 by EDK II
Jan 23 00:05:09.148658 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Jan 23 00:05:09.148674 kernel: secureboot: Secure boot disabled
Jan 23 00:05:09.148691 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:05:09.148707 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 00:05:09.148722 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 00:05:09.148738 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 00:05:09.148753 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 00:05:09.148773 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 00:05:09.148788 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 00:05:09.148804 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 00:05:09.148823 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 00:05:09.148840 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 00:05:09.148861 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 00:05:09.148877 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 00:05:09.148894 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 00:05:09.148910 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 00:05:09.148926 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 00:05:09.148942 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 00:05:09.148958 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 00:05:09.148975 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 00:05:09.148991 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 00:05:09.149007 kernel: Zone ranges:
Jan 23 00:05:09.149024 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 00:05:09.149044 kernel: DMA32 empty
Jan 23 00:05:09.149060 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 00:05:09.149076 kernel: Device empty
Jan 23 00:05:09.149092 kernel: Movable zone start for each node
Jan 23 00:05:09.149146 kernel: Early memory node ranges
Jan 23 00:05:09.149166 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 00:05:09.149183 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 00:05:09.149199 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 00:05:09.149216 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 00:05:09.149233 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 00:05:09.149249 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 00:05:09.149265 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 00:05:09.149288 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 00:05:09.149311 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 00:05:09.149329 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 00:05:09.149346 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 00:05:09.149365 kernel: psci: probing for conduit method from ACPI.
Jan 23 00:05:09.149386 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 00:05:09.149403 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 00:05:09.149420 kernel: psci: Trusted OS migration not required
Jan 23 00:05:09.149436 kernel: psci: SMC Calling Convention v1.1
Jan 23 00:05:09.149454 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 00:05:09.149471 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 00:05:09.149488 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 00:05:09.149505 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 00:05:09.149522 kernel: Detected PIPT I-cache on CPU0
Jan 23 00:05:09.149539 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 00:05:09.149556 kernel: CPU features: detected: Spectre-v2
Jan 23 00:05:09.149576 kernel: CPU features: detected: Spectre-v3a
Jan 23 00:05:09.149594 kernel: CPU features: detected: Spectre-BHB
Jan 23 00:05:09.149611 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 00:05:09.149627 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 00:05:09.149645 kernel: alternatives: applying boot alternatives
Jan 23 00:05:09.149664 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:05:09.149682 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:05:09.149699 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:05:09.149716 kernel: Fallback order for Node 0: 0
Jan 23 00:05:09.149733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 00:05:09.149750 kernel: Policy zone: Normal
Jan 23 00:05:09.149771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:05:09.149788 kernel: software IO TLB: area num 2.
Jan 23 00:05:09.149805 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jan 23 00:05:09.149823 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:05:09.149840 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:05:09.149858 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:05:09.149875 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:05:09.149893 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:05:09.149911 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:05:09.149928 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:05:09.149945 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:05:09.149966 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:05:09.149983 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:05:09.150000 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 00:05:09.150017 kernel: GICv3: 96 SPIs implemented
Jan 23 00:05:09.150034 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 00:05:09.150051 kernel: Root IRQ handler: gic_handle_irq
Jan 23 00:05:09.150067 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 00:05:09.150084 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 00:05:09.150122 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 00:05:09.150145 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 00:05:09.150163 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 00:05:09.150181 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 00:05:09.150204 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 00:05:09.150221 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 00:05:09.150238 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 00:05:09.150255 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:05:09.150272 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 00:05:09.150289 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 00:05:09.150307 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 00:05:09.150324 kernel: Console: colour dummy device 80x25
Jan 23 00:05:09.150341 kernel: printk: legacy console [tty1] enabled
Jan 23 00:05:09.150358 kernel: ACPI: Core revision 20240827
Jan 23 00:05:09.150376 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 00:05:09.150397 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:05:09.150415 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:05:09.150432 kernel: landlock: Up and running.
Jan 23 00:05:09.150449 kernel: SELinux: Initializing.
Jan 23 00:05:09.150466 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:05:09.150483 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:05:09.150501 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:05:09.150518 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:05:09.150539 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:05:09.150556 kernel: Remapping and enabling EFI services.
Jan 23 00:05:09.150573 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:05:09.150590 kernel: Detected PIPT I-cache on CPU1
Jan 23 00:05:09.150608 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 00:05:09.150625 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 00:05:09.150642 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 00:05:09.150659 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:05:09.150698 kernel: SMP: Total of 2 processors activated.
Jan 23 00:05:09.150721 kernel: CPU: All CPU(s) started at EL1
Jan 23 00:05:09.150750 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 00:05:09.150768 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 00:05:09.150790 kernel: CPU features: detected: CRC32 instructions
Jan 23 00:05:09.150808 kernel: alternatives: applying system-wide alternatives
Jan 23 00:05:09.150828 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 00:05:09.150846 kernel: devtmpfs: initialized
Jan 23 00:05:09.150865 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:05:09.150887 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:05:09.150905 kernel: 16880 pages in range for non-PLT usage
Jan 23 00:05:09.150923 kernel: 508400 pages in range for PLT usage
Jan 23 00:05:09.150941 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:05:09.150959 kernel: SMBIOS 3.0.0 present.
Jan 23 00:05:09.150977 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 00:05:09.150995 kernel: DMI: Memory slots populated: 0/0
Jan 23 00:05:09.151013 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:05:09.151031 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 00:05:09.151054 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 00:05:09.151072 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 00:05:09.151090 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:05:09.151144 kernel: audit: type=2000 audit(0.259:1): state=initialized audit_enabled=0 res=1
Jan 23 00:05:09.151164 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:05:09.151183 kernel: cpuidle: using governor menu
Jan 23 00:05:09.151201 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 00:05:09.151220 kernel: ASID allocator initialised with 65536 entries
Jan 23 00:05:09.151238 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:05:09.151261 kernel: Serial: AMBA PL011 UART driver
Jan 23 00:05:09.151280 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:05:09.151298 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:05:09.151316 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 00:05:09.151333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 00:05:09.151352 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:05:09.151370 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:05:09.151388 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 00:05:09.151406 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 00:05:09.151428 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:05:09.151446 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:05:09.151464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:05:09.151482 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:05:09.151500 kernel: ACPI: Interpreter enabled
Jan 23 00:05:09.151518 kernel: ACPI: Using GIC for interrupt routing
Jan 23 00:05:09.151536 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 00:05:09.151553 kernel: ACPI: CPU0 has been hot-added
Jan 23 00:05:09.151571 kernel: ACPI: CPU1 has been hot-added
Jan 23 00:05:09.151593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 00:05:09.151886 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:05:09.152089 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 00:05:09.152323 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 00:05:09.152513 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 00:05:09.152699 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 00:05:09.152724 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 00:05:09.152752 kernel: acpiphp: Slot [1] registered
Jan 23 00:05:09.152771 kernel: acpiphp: Slot [2] registered
Jan 23 00:05:09.152789 kernel: acpiphp: Slot [3] registered
Jan 23 00:05:09.152808 kernel: acpiphp: Slot [4] registered
Jan 23 00:05:09.152826 kernel: acpiphp: Slot [5] registered
Jan 23 00:05:09.152844 kernel: acpiphp: Slot [6] registered
Jan 23 00:05:09.152862 kernel: acpiphp: Slot [7] registered
Jan 23 00:05:09.152880 kernel: acpiphp: Slot [8] registered
Jan 23 00:05:09.152898 kernel: acpiphp: Slot [9] registered
Jan 23 00:05:09.152916 kernel: acpiphp: Slot [10] registered
Jan 23 00:05:09.152938 kernel: acpiphp: Slot [11] registered
Jan 23 00:05:09.152957 kernel: acpiphp: Slot [12] registered
Jan 23 00:05:09.152974 kernel: acpiphp: Slot [13] registered
Jan 23 00:05:09.152992 kernel: acpiphp: Slot [14] registered
Jan 23 00:05:09.153010 kernel: acpiphp: Slot [15] registered
Jan 23 00:05:09.153028 kernel: acpiphp: Slot [16] registered
Jan 23 00:05:09.153046 kernel: acpiphp: Slot [17] registered
Jan 23 00:05:09.153064 kernel: acpiphp: Slot [18] registered
Jan 23 00:05:09.153082 kernel: acpiphp: Slot [19] registered
Jan 23 00:05:09.155147 kernel: acpiphp: Slot [20] registered
Jan 23 00:05:09.155192 kernel: acpiphp: Slot [21] registered
Jan 23 00:05:09.155212 kernel: acpiphp: Slot [22] registered
Jan 23 00:05:09.155231 kernel: acpiphp: Slot [23] registered
Jan 23 00:05:09.155250 kernel: acpiphp: Slot [24] registered
Jan 23 00:05:09.155269 kernel: acpiphp: Slot [25] registered
Jan 23 00:05:09.155288 kernel: acpiphp: Slot [26] registered
Jan 23 00:05:09.155307 kernel: acpiphp: Slot [27] registered
Jan 23 00:05:09.155326 kernel: acpiphp: Slot [28] registered
Jan 23 00:05:09.155344 kernel: acpiphp: Slot [29] registered
Jan 23 00:05:09.155390 kernel: acpiphp: Slot [30] registered
Jan 23 00:05:09.155413 kernel: acpiphp: Slot [31] registered
Jan 23 00:05:09.155432 kernel: PCI host bridge to bus 0000:00
Jan 23 00:05:09.155727 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 00:05:09.155918 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 00:05:09.156086 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 00:05:09.156304 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 00:05:09.156537 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:05:09.156748 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 00:05:09.156943 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 00:05:09.157172 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 00:05:09.157374 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 00:05:09.157568 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 00:05:09.157791 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 00:05:09.157989 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 00:05:09.158709 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 00:05:09.158937 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 00:05:09.159250 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 00:05:09.159451 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 00:05:09.159625 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 00:05:09.159808 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 00:05:09.159834 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 00:05:09.159854 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 00:05:09.159874 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 00:05:09.159892 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 00:05:09.159910 kernel: iommu: Default domain type: Translated
Jan 23 00:05:09.159928 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 00:05:09.159946 kernel: efivars: Registered efivars operations
Jan 23 00:05:09.159964 kernel: vgaarb: loaded
Jan 23 00:05:09.159988 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 00:05:09.160006 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:05:09.160024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:05:09.160042 kernel: pnp: PnP ACPI init
Jan 23 00:05:09.161135 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 00:05:09.161186 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 00:05:09.161206 kernel: NET: Registered PF_INET protocol family
Jan 23 00:05:09.161225 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:05:09.161253 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:05:09.161272 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:05:09.161291 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:05:09.161310 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:05:09.161329 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:05:09.161347 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:05:09.161367 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:05:09.161385 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:05:09.161403 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:05:09.161426 kernel: kvm [1]: HYP mode not available
Jan 23 00:05:09.161445 kernel: Initialise system trusted keyrings
Jan 23 00:05:09.161464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:05:09.161482 kernel: Key type asymmetric registered
Jan 23 00:05:09.161500 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:05:09.161519 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 00:05:09.161538 kernel: io scheduler mq-deadline registered
Jan 23 00:05:09.161557 kernel: io scheduler kyber registered
Jan 23 00:05:09.161576 kernel: io scheduler bfq registered
Jan 23 00:05:09.161842 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 00:05:09.161873 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 00:05:09.161892 kernel: ACPI: button: Power Button [PWRB]
Jan 23 00:05:09.161912 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 00:05:09.161931 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 00:05:09.161950 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:05:09.161970 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 00:05:09.162216 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 00:05:09.162254 kernel: printk: legacy console [ttyS0] disabled
Jan 23 00:05:09.162274 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 00:05:09.162294 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:05:09.162312 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 00:05:09.162331 kernel: thunder_xcv, ver 1.0
Jan 23 00:05:09.162350 kernel: thunder_bgx, ver 1.0
Jan 23 00:05:09.162369 kernel: nicpf, ver 1.0
Jan 23 00:05:09.162387 kernel: nicvf, ver 1.0
Jan 23 00:05:09.162616 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 00:05:09.162861 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:05:08 UTC (1769126708)
Jan 23 00:05:09.162891 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 00:05:09.162911 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 00:05:09.162930 kernel: watchdog: NMI not fully supported
Jan 23 00:05:09.162949 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:05:09.162968 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 00:05:09.162989 kernel: Segment Routing with IPv6
Jan 23 00:05:09.163008 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:05:09.163027 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:05:09.163055 kernel: Key type dns_resolver registered
Jan 23 00:05:09.163074 kernel: registered taskstats version 1
Jan 23 00:05:09.163093 kernel: Loading compiled-in X.509 certificates
Jan 23 00:05:09.163186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f'
Jan 23 00:05:09.163208 kernel: Demotion targets for Node 0: null
Jan 23 00:05:09.163226 kernel: Key type .fscrypt registered
Jan 23 00:05:09.163244 kernel: Key type fscrypt-provisioning registered
Jan 23 00:05:09.163263 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:05:09.163281 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:05:09.163308 kernel: ima: No architecture policies found
Jan 23 00:05:09.163327 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 00:05:09.163346 kernel: clk: Disabling unused clocks
Jan 23 00:05:09.163364 kernel: PM: genpd: Disabling unused power domains
Jan 23 00:05:09.163382 kernel: Warning: unable to open an initial console.
Jan 23 00:05:09.163401 kernel: Freeing unused kernel memory: 39552K
Jan 23 00:05:09.163419 kernel: Run /init as init process
Jan 23 00:05:09.163437 kernel: with arguments:
Jan 23 00:05:09.163455 kernel: /init
Jan 23 00:05:09.163478 kernel: with environment:
Jan 23 00:05:09.163496 kernel: HOME=/
Jan 23 00:05:09.163515 kernel: TERM=linux
Jan 23 00:05:09.163536 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:05:09.163561 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:05:09.163584 systemd[1]: Detected virtualization amazon.
Jan 23 00:05:09.163604 systemd[1]: Detected architecture arm64.
Jan 23 00:05:09.163628 systemd[1]: Running in initrd.
Jan 23 00:05:09.163648 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:05:09.163669 systemd[1]: Hostname set to .
Jan 23 00:05:09.163688 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:05:09.163707 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:05:09.163727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:05:09.163746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:05:09.163768 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:05:09.163792 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:05:09.163813 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:05:09.163834 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:05:09.163855 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:05:09.163875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:05:09.163895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:05:09.163914 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:05:09.163938 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:05:09.163958 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:05:09.163977 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:05:09.163997 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:05:09.164016 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:05:09.164036 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:05:09.164056 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 00:05:09.164075 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 00:05:09.164096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:05:09.164177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:05:09.164198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:05:09.164218 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:05:09.164237 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 00:05:09.164257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:05:09.164276 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 00:05:09.164297 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 00:05:09.164317 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 00:05:09.164341 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:05:09.164361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:05:09.164381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:05:09.164401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 00:05:09.164421 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:05:09.164446 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 00:05:09.164466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:05:09.164543 systemd-journald[258]: Collecting audit messages is disabled.
Jan 23 00:05:09.164586 systemd-journald[258]: Journal started
Jan 23 00:05:09.164628 systemd-journald[258]: Runtime Journal (/run/log/journal/ec29e5bebaf044cd1652ced04e95f37f) is 8M, max 75.3M, 67.3M free.
Jan 23 00:05:09.167620 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:05:09.127768 systemd-modules-load[260]: Inserted module 'overlay'
Jan 23 00:05:09.170620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:05:09.184200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:05:09.197300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 00:05:09.212834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 00:05:09.214835 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:05:09.228284 systemd-modules-load[260]: Inserted module 'br_netfilter'
Jan 23 00:05:09.231466 kernel: Bridge firewalling registered
Jan 23 00:05:09.236293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:05:09.245215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:05:09.252194 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 00:05:09.257316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:05:09.280857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:05:09.299475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:05:09.308460 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 00:05:09.326517 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:05:09.340201 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:05:09.355244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:05:09.388065 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:05:09.473984 systemd-resolved[302]: Positive Trust Anchors:
Jan 23 00:05:09.474041 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:05:09.474197 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:05:09.575149 kernel: SCSI subsystem initialized
Jan 23 00:05:09.583150 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 00:05:09.597164 kernel: iscsi: registered transport (tcp)
Jan 23 00:05:09.620149 kernel: iscsi: registered transport (qla4xxx)
Jan 23 00:05:09.620225 kernel: QLogic iSCSI HBA Driver
Jan 23 00:05:09.660354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:05:09.693239 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:05:09.698586 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:05:09.752163 kernel: random: crng init done
Jan 23 00:05:09.752573 systemd-resolved[302]: Defaulting to hostname 'linux'.
Jan 23 00:05:09.758398 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:05:09.762328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:05:09.826235 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:05:09.833871 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 00:05:09.941170 kernel: raid6: neonx8 gen() 6471 MB/s
Jan 23 00:05:09.958178 kernel: raid6: neonx4 gen() 6243 MB/s
Jan 23 00:05:09.976175 kernel: raid6: neonx2 gen() 4990 MB/s
Jan 23 00:05:09.993164 kernel: raid6: neonx1 gen() 3760 MB/s
Jan 23 00:05:10.010159 kernel: raid6: int64x8 gen() 3341 MB/s
Jan 23 00:05:10.027168 kernel: raid6: int64x4 gen() 3539 MB/s
Jan 23 00:05:10.044164 kernel: raid6: int64x2 gen() 3578 MB/s
Jan 23 00:05:10.062319 kernel: raid6: int64x1 gen() 2573 MB/s
Jan 23 00:05:10.062391 kernel: raid6: using algorithm neonx8 gen() 6471 MB/s
Jan 23 00:05:10.081812 kernel: raid6: .... xor() 4081 MB/s, rmw enabled
Jan 23 00:05:10.081897 kernel: raid6: using neon recovery algorithm
Jan 23 00:05:10.092472 kernel: xor: measuring software checksum speed
Jan 23 00:05:10.092548 kernel: 8regs : 13009 MB/sec
Jan 23 00:05:10.093718 kernel: 32regs : 13016 MB/sec
Jan 23 00:05:10.095500 kernel: arm64_neon : 9157 MB/sec
Jan 23 00:05:10.095550 kernel: xor: using function: 32regs (13016 MB/sec)
Jan 23 00:05:10.210153 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 00:05:10.226235 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:05:10.236838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:05:10.295497 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Jan 23 00:05:10.306389 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:05:10.327760 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 00:05:10.381524 dracut-pre-trigger[519]: rd.md=0: removing MD RAID activation
Jan 23 00:05:10.431847 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:05:10.441495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:05:10.575874 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:05:10.593528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 00:05:10.783459 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 00:05:10.783550 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 00:05:10.792191 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 00:05:10.794316 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 00:05:10.794703 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 00:05:10.797692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:05:10.803540 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 00:05:10.801296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:05:10.812986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:05:10.818138 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 00:05:10.818454 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:cd:58:72:2b:bf
Jan 23 00:05:10.828249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 00:05:10.828320 kernel: GPT:9289727 != 33554431 Jan 23 00:05:10.829982 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:05:10.828770 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:05:10.842644 kernel: GPT:9289727 != 33554431 Jan 23 00:05:10.842713 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 00:05:10.842741 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 00:05:10.839157 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:05:10.853072 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line. Jan 23 00:05:10.895159 kernel: nvme nvme0: using unchecked data buffer Jan 23 00:05:10.910728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:05:11.052527 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 00:05:11.128663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 00:05:11.153181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:05:11.184196 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 00:05:11.210964 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 00:05:11.218983 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 00:05:11.228700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:05:11.232961 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:05:11.237147 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:05:11.242866 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 23 00:05:11.256489 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:05:11.313204 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 00:05:11.313288 disk-uuid[688]: Primary Header is updated. Jan 23 00:05:11.313288 disk-uuid[688]: Secondary Entries is updated. Jan 23 00:05:11.313288 disk-uuid[688]: Secondary Header is updated. Jan 23 00:05:11.334268 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:05:11.364184 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 00:05:12.366225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 00:05:12.370063 disk-uuid[693]: The operation has completed successfully. Jan 23 00:05:12.553214 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:05:12.554228 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:05:12.668620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:05:12.695484 sh[955]: Success Jan 23 00:05:12.728666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 00:05:12.728745 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:05:12.730859 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:05:12.748146 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 00:05:12.861963 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:05:12.872015 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:05:12.882869 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 23 00:05:12.921152 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (978) Jan 23 00:05:12.925774 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e Jan 23 00:05:12.925861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:05:13.053450 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 00:05:13.053526 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:05:13.054894 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:05:13.071855 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:05:13.077459 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:05:13.082058 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:05:13.083721 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:05:13.100526 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 00:05:13.155154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1013) Jan 23 00:05:13.161069 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:05:13.161205 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:05:13.180925 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 00:05:13.181021 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 00:05:13.190174 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:05:13.191756 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 23 00:05:13.206753 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:05:13.314146 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:05:13.325885 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:05:13.413750 systemd-networkd[1147]: lo: Link UP Jan 23 00:05:13.413782 systemd-networkd[1147]: lo: Gained carrier Jan 23 00:05:13.417881 systemd-networkd[1147]: Enumeration completed Jan 23 00:05:13.418276 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:05:13.419450 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:05:13.419459 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:05:13.422928 systemd[1]: Reached target network.target - Network. Jan 23 00:05:13.443614 systemd-networkd[1147]: eth0: Link UP Jan 23 00:05:13.443622 systemd-networkd[1147]: eth0: Gained carrier Jan 23 00:05:13.443646 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:05:13.470246 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.17.104/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 00:05:13.849466 ignition[1080]: Ignition 2.22.0 Jan 23 00:05:13.849501 ignition[1080]: Stage: fetch-offline Jan 23 00:05:13.853993 ignition[1080]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:13.854033 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:13.860144 ignition[1080]: Ignition finished successfully Jan 23 00:05:13.867296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:05:13.880048 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 00:05:13.929188 ignition[1159]: Ignition 2.22.0 Jan 23 00:05:13.929221 ignition[1159]: Stage: fetch Jan 23 00:05:13.929783 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:13.929809 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:13.930580 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:13.986841 ignition[1159]: PUT result: OK Jan 23 00:05:13.991618 ignition[1159]: parsed url from cmdline: "" Jan 23 00:05:13.991790 ignition[1159]: no config URL provided Jan 23 00:05:13.991812 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:05:13.991838 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:05:13.992055 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:14.002833 ignition[1159]: PUT result: OK Jan 23 00:05:14.003148 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 00:05:14.009097 ignition[1159]: GET result: OK Jan 23 00:05:14.009669 ignition[1159]: parsing config with SHA512: 64d66a651c88b6df712902fba5511976d68ebca18c50ed0fd0016ec214d274778821d8d1d2f35beab9a78c1d30cc7c41734fe360de8b8c26084b0a244972d825 Jan 23 00:05:14.026556 unknown[1159]: fetched base config from "system" Jan 23 00:05:14.027383 ignition[1159]: fetch: fetch complete Jan 23 00:05:14.026579 unknown[1159]: fetched base config from "system" Jan 23 00:05:14.027397 ignition[1159]: fetch: fetch passed Jan 23 00:05:14.026593 unknown[1159]: fetched user config from "aws" Jan 23 00:05:14.027502 ignition[1159]: Ignition finished successfully Jan 23 00:05:14.032842 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 00:05:14.044271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 00:05:14.113822 ignition[1165]: Ignition 2.22.0 Jan 23 00:05:14.113859 ignition[1165]: Stage: kargs Jan 23 00:05:14.114856 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:14.115132 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:14.115345 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:14.118740 ignition[1165]: PUT result: OK Jan 23 00:05:14.134073 ignition[1165]: kargs: kargs passed Jan 23 00:05:14.134310 ignition[1165]: Ignition finished successfully Jan 23 00:05:14.140442 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:05:14.151408 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 00:05:14.210963 ignition[1172]: Ignition 2.22.0 Jan 23 00:05:14.211624 ignition[1172]: Stage: disks Jan 23 00:05:14.212351 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:14.212379 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:14.212601 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:14.224816 ignition[1172]: PUT result: OK Jan 23 00:05:14.234513 ignition[1172]: disks: disks passed Jan 23 00:05:14.234646 ignition[1172]: Ignition finished successfully Jan 23 00:05:14.242217 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 00:05:14.250389 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:05:14.257158 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:05:14.261095 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:05:14.270358 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:05:14.274072 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:05:14.284481 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 00:05:14.356614 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 00:05:14.363680 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:05:14.373541 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:05:14.521165 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none. Jan 23 00:05:14.522676 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:05:14.528926 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:05:14.537264 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:05:14.542714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 00:05:14.549812 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 00:05:14.549917 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:05:14.549971 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:05:14.603352 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:05:14.610995 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 00:05:14.635159 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1199) Jan 23 00:05:14.639751 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:05:14.639817 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:05:14.648181 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 00:05:14.648296 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 00:05:14.651025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:05:14.988746 initrd-setup-root[1223]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:05:14.998898 initrd-setup-root[1230]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:05:15.008666 initrd-setup-root[1237]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:05:15.017663 initrd-setup-root[1244]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:05:15.352789 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:05:15.361302 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:05:15.369615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 00:05:15.404869 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 00:05:15.408654 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:05:15.444218 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 00:05:15.470156 ignition[1313]: INFO : Ignition 2.22.0 Jan 23 00:05:15.470156 ignition[1313]: INFO : Stage: mount Jan 23 00:05:15.477297 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:15.477297 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:15.477297 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:15.477297 ignition[1313]: INFO : PUT result: OK Jan 23 00:05:15.494545 ignition[1313]: INFO : mount: mount passed Jan 23 00:05:15.496747 ignition[1313]: INFO : Ignition finished successfully Jan 23 00:05:15.499368 systemd-networkd[1147]: eth0: Gained IPv6LL Jan 23 00:05:15.505189 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:05:15.513329 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:05:15.542009 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:05:15.584179 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1324) Jan 23 00:05:15.589453 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:05:15.589522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:05:15.596847 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 00:05:15.596951 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 23 00:05:15.600351 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 00:05:15.656157 ignition[1340]: INFO : Ignition 2.22.0 Jan 23 00:05:15.656157 ignition[1340]: INFO : Stage: files Jan 23 00:05:15.661048 ignition[1340]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:15.661048 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:15.661048 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:15.671753 ignition[1340]: INFO : PUT result: OK Jan 23 00:05:15.679510 ignition[1340]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:05:15.695636 ignition[1340]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:05:15.695636 ignition[1340]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:05:15.721146 ignition[1340]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:05:15.726419 ignition[1340]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:05:15.731464 unknown[1340]: wrote ssh authorized keys file for user: core Jan 23 00:05:15.734577 ignition[1340]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:05:15.738391 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 00:05:15.738391 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 00:05:15.822473 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:05:15.983498 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 00:05:15.983498 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 00:05:15.983498 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 23 00:05:16.216658 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 23 00:05:16.361190 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 23 00:05:16.361190 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:05:16.361190 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:05:16.377770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:05:16.418081 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:05:16.423878 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:05:16.429568 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:05:16.438021 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:05:16.438021 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:05:16.438021 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 00:05:16.770291 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 23 00:05:17.159906 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 00:05:17.159906 ignition[1340]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 23 00:05:17.186710 ignition[1340]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:05:17.193079 ignition[1340]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:05:17.193079 ignition[1340]: INFO : files: files passed Jan 23 00:05:17.193079 ignition[1340]: INFO : Ignition finished successfully Jan 23 00:05:17.208812 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:05:17.221443 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:05:17.236735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:05:17.263710 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:05:17.265718 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:05:17.288193 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:05:17.288193 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:05:17.299846 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:05:17.307117 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:05:17.307632 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 00:05:17.318664 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 00:05:17.427044 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 00:05:17.427471 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 00:05:17.435074 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:05:17.443458 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 00:05:17.448991 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 00:05:17.450540 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 00:05:17.497324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:05:17.504054 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 00:05:17.546953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:05:17.550937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:05:17.560249 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 00:05:17.565275 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 00:05:17.565534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:05:17.575288 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 00:05:17.581661 systemd[1]: Stopped target basic.target - Basic System. Jan 23 00:05:17.586939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 00:05:17.590013 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:05:17.598556 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 00:05:17.602551 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:05:17.610803 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 00:05:17.613648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:05:17.622430 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 00:05:17.625645 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 23 00:05:17.633214 systemd[1]: Stopped target swap.target - Swaps. Jan 23 00:05:17.635635 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 00:05:17.635898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:05:17.645861 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:05:17.648942 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:05:17.657935 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 00:05:17.660716 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:05:17.664192 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 00:05:17.664452 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 00:05:17.676074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 00:05:17.676765 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:05:17.686622 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 00:05:17.686895 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 00:05:17.695691 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 00:05:17.701881 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:05:17.702226 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:05:17.716828 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 00:05:17.719725 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 00:05:17.720032 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:05:17.728712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:05:17.729129 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 00:05:17.755967 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:05:17.756539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:05:17.801160 ignition[1395]: INFO : Ignition 2.22.0 Jan 23 00:05:17.801160 ignition[1395]: INFO : Stage: umount Jan 23 00:05:17.801160 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:05:17.805527 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:05:17.814080 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 00:05:17.823220 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 00:05:17.827348 ignition[1395]: INFO : PUT result: OK Jan 23 00:05:17.838874 ignition[1395]: INFO : umount: umount passed Jan 23 00:05:17.839624 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:05:17.841629 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:05:17.850183 ignition[1395]: INFO : Ignition finished successfully Jan 23 00:05:17.856289 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:05:17.856684 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:05:17.865462 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:05:17.866224 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:05:17.871493 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:05:17.871595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:05:17.879876 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:05:17.879980 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 00:05:17.888180 systemd[1]: Stopped target network.target - Network. Jan 23 00:05:17.895559 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 23 00:05:17.895836 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:05:17.906081 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:05:17.909400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:05:17.914191 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:05:17.917618 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:05:17.920280 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:05:17.928841 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:05:17.928920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:05:17.932652 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:05:17.932729 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:05:17.937439 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:05:17.937552 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:05:17.940671 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:05:17.940764 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:05:17.948625 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:05:17.948737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:05:17.953564 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:05:17.962318 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:05:17.989487 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:05:17.993820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:05:18.005954 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:05:18.009915 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:05:18.010336 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:05:18.022435 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:05:18.023732 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:05:18.029949 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:05:18.030033 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:05:18.034477 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:05:18.036807 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:05:18.036945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:05:18.046165 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:05:18.046310 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:05:18.075933 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:05:18.076047 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:05:18.079422 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:05:18.079537 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:05:18.097289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:05:18.105013 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:05:18.105681 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:05:18.134852 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:05:18.135448 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:05:18.145753 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:05:18.146161 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:05:18.157581 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:05:18.157722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:05:18.161677 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:05:18.161752 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:05:18.165163 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:05:18.165260 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:05:18.178946 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:05:18.179049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:05:18.189036 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:05:18.189182 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:05:18.196078 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:05:18.212313 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:05:18.215366 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:05:18.224937 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:05:18.225049 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:05:18.235076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:05:18.235234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:05:18.241444 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 00:05:18.241569 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 00:05:18.241657 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:05:18.259284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:05:18.259488 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:05:18.281334 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:05:18.286070 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:05:18.323961 systemd[1]: Switching root.
Jan 23 00:05:18.397254 systemd-journald[258]: Journal stopped
Jan 23 00:05:21.514460 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:05:21.514618 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:05:21.517769 kernel: SELinux: policy capability open_perms=1
Jan 23 00:05:21.517827 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:05:21.517861 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:05:21.518213 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:05:21.518257 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:05:21.518290 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:05:21.518325 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:05:21.518357 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:05:21.518387 kernel: audit: type=1403 audit(1769126719.128:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:05:21.518426 systemd[1]: Successfully loaded SELinux policy in 167.632ms.
Jan 23 00:05:21.518486 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 40.043ms.
Jan 23 00:05:21.518523 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:05:21.518556 systemd[1]: Detected virtualization amazon.
Jan 23 00:05:21.518587 systemd[1]: Detected architecture arm64.
Jan 23 00:05:21.518619 systemd[1]: Detected first boot.
Jan 23 00:05:21.518682 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:05:21.518719 zram_generator::config[1438]: No configuration found.
Jan 23 00:05:21.518752 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:05:21.518790 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:05:21.518825 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:05:21.518859 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:05:21.518891 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:05:21.518919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:05:21.518954 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:05:21.518988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:05:21.519017 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:05:21.519059 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:05:21.519098 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:05:21.523580 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:05:21.523618 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:05:21.523651 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:05:21.523681 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:05:21.523714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:05:21.523746 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:05:21.523778 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:05:21.523808 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:05:21.523849 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:05:21.523878 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 00:05:21.523910 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:05:21.523939 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:05:21.523967 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:05:21.523999 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:05:21.524030 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:05:21.524066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:05:21.524095 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:05:21.524166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:05:21.524200 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:05:21.524240 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:05:21.524275 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:05:21.524304 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:05:21.524335 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:05:21.524365 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:05:21.524397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:05:21.524438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:05:21.524473 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:05:21.524511 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:05:21.524540 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:05:21.524571 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:05:21.530036 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:05:21.530068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:05:21.533562 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:05:21.533662 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:05:21.533697 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:05:21.533738 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:05:21.533773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:05:21.533804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:05:21.533837 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:05:21.533869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:05:21.533899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:05:21.533934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:05:21.533970 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:05:21.534000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:05:21.534030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:05:21.534061 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:05:21.534092 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:05:21.541991 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:05:21.542051 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:05:21.542097 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:05:21.546244 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:05:21.546288 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:05:21.546320 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:05:21.546354 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:05:21.546384 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:05:21.546417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:05:21.546449 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:05:21.546479 systemd[1]: Stopped verity-setup.service.
Jan 23 00:05:21.546518 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:05:21.546552 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:05:21.546591 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:05:21.546622 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:05:21.546680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:05:21.546719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:05:21.546751 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:05:21.546781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:05:21.546827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:05:21.546859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:05:21.546888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:05:21.546926 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:05:21.546958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:05:21.546988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:05:21.547017 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:05:21.547050 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:05:21.547081 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:05:21.547148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:05:21.547185 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:05:21.547221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:05:21.547261 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:05:21.547291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:05:21.547386 systemd-journald[1517]: Collecting audit messages is disabled.
Jan 23 00:05:21.547449 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:05:21.547483 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:05:21.547516 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:05:21.547548 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:05:21.547586 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:05:21.547617 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:05:21.547653 kernel: fuse: init (API version 7.41)
Jan 23 00:05:21.547683 kernel: loop: module loaded
Jan 23 00:05:21.547713 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:05:21.547749 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:05:21.547779 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:05:21.547810 systemd-journald[1517]: Journal started
Jan 23 00:05:21.547864 systemd-journald[1517]: Runtime Journal (/run/log/journal/ec29e5bebaf044cd1652ced04e95f37f) is 8M, max 75.3M, 67.3M free.
Jan 23 00:05:20.697939 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:05:21.563529 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:05:20.721857 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 00:05:20.722878 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:05:21.566483 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:05:21.576955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:05:21.585323 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:05:21.598621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:05:21.600485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:05:21.629653 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:05:21.634003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:05:21.679513 kernel: loop0: detected capacity change from 0 to 207008
Jan 23 00:05:21.676445 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:05:21.697867 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:05:21.707434 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:05:21.730053 systemd-journald[1517]: Time spent on flushing to /var/log/journal/ec29e5bebaf044cd1652ced04e95f37f is 102.484ms for 928 entries.
Jan 23 00:05:21.730053 systemd-journald[1517]: System Journal (/var/log/journal/ec29e5bebaf044cd1652ced04e95f37f) is 8M, max 195.6M, 187.6M free.
Jan 23 00:05:21.870453 systemd-journald[1517]: Received client request to flush runtime journal.
Jan 23 00:05:21.870579 kernel: ACPI: bus type drm_connector registered
Jan 23 00:05:21.870677 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:05:21.870724 kernel: loop1: detected capacity change from 0 to 61264
Jan 23 00:05:21.735067 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:05:21.744792 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:05:21.789052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:05:21.808731 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:05:21.809382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:05:21.839078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:05:21.846864 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:05:21.879307 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:05:21.959047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:05:21.998254 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:05:22.009661 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:05:22.044272 kernel: loop2: detected capacity change from 0 to 119840
Jan 23 00:05:22.090807 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Jan 23 00:05:22.090852 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Jan 23 00:05:22.100077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:05:22.180704 kernel: loop3: detected capacity change from 0 to 100632
Jan 23 00:05:22.330535 kernel: loop4: detected capacity change from 0 to 207008
Jan 23 00:05:22.368165 kernel: loop5: detected capacity change from 0 to 61264
Jan 23 00:05:22.392197 kernel: loop6: detected capacity change from 0 to 119840
Jan 23 00:05:22.418426 kernel: loop7: detected capacity change from 0 to 100632
Jan 23 00:05:22.445581 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 00:05:22.449604 (sd-merge)[1599]: Merged extensions into '/usr'.
Jan 23 00:05:22.460659 systemd[1]: Reload requested from client PID 1541 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:05:22.460891 systemd[1]: Reloading...
Jan 23 00:05:22.686167 zram_generator::config[1625]: No configuration found.
Jan 23 00:05:23.361866 systemd[1]: Reloading finished in 900 ms.
Jan 23 00:05:23.387210 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:05:23.395497 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:05:23.412989 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:05:23.424490 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:05:23.439736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:05:23.450455 ldconfig[1533]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:05:23.480233 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:05:23.492984 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:05:23.493029 systemd[1]: Reloading...
Jan 23 00:05:23.516838 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:05:23.518404 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:05:23.519711 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:05:23.522377 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:05:23.531408 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:05:23.532073 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jan 23 00:05:23.532264 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Jan 23 00:05:23.548541 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:05:23.551383 systemd-tmpfiles[1678]: Skipping /boot
Jan 23 00:05:23.585548 systemd-udevd[1679]: Using default interface naming scheme 'v255'.
Jan 23 00:05:23.607661 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:05:23.609211 systemd-tmpfiles[1678]: Skipping /boot
Jan 23 00:05:23.791774 zram_generator::config[1731]: No configuration found.
Jan 23 00:05:24.113769 (udev-worker)[1721]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 00:05:24.453643 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 00:05:24.456559 systemd[1]: Reloading finished in 962 ms.
Jan 23 00:05:24.479635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:05:24.510275 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:05:24.554393 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:05:24.568578 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:05:24.576593 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:05:24.586674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:05:24.602902 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:05:24.611565 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:05:24.627478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:05:24.631777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:05:24.641447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:05:24.651066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:05:24.655252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:05:24.655561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:05:24.667802 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:05:24.676353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:05:24.676794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:05:24.677026 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:05:24.686987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:05:24.699768 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:05:24.703100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:05:24.703407 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:05:24.703786 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:05:24.718146 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:05:24.809461 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:05:24.891013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:05:24.905568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:05:24.910481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:05:24.912312 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:05:24.918552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:05:24.920320 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:05:24.936683 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:05:24.939097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:05:24.948868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:05:24.949706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:05:24.977901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:05:24.980435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:05:24.984518 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:05:24.992603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:05:24.998346 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:05:25.016160 augenrules[1895]: No rules
Jan 23 00:05:25.023371 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:05:25.024811 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:05:25.068092 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:05:25.324246 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:05:25.341583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 00:05:25.347297 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:05:25.365436 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 00:05:25.413997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:05:25.522176 systemd-resolved[1818]: Positive Trust Anchors:
Jan 23 00:05:25.522805 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:05:25.523013 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:05:25.537290 systemd-networkd[1817]: lo: Link UP
Jan 23 00:05:25.537316 systemd-networkd[1817]: lo: Gained carrier
Jan 23 00:05:25.541658 systemd-networkd[1817]: Enumeration completed
Jan 23 00:05:25.541851 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:05:25.548706 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:05:25.548739 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:05:25.549561 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 00:05:25.552922 systemd-resolved[1818]: Defaulting to hostname 'linux'.
Jan 23 00:05:25.558586 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 00:05:25.562492 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:05:25.567092 systemd[1]: Reached target network.target - Network.
Jan 23 00:05:25.570193 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:05:25.574939 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:05:25.578815 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:05:25.586783 systemd-networkd[1817]: eth0: Link UP Jan 23 00:05:25.587393 systemd-networkd[1817]: eth0: Gained carrier Jan 23 00:05:25.587437 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:05:25.589200 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:05:25.595502 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:05:25.599174 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:05:25.603092 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:05:25.610906 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:05:25.610988 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:05:25.614037 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:05:25.619383 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:05:25.630440 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:05:25.637230 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.17.104/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 00:05:25.638699 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:05:25.644339 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:05:25.648340 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:05:25.665221 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 23 00:05:25.669207 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:05:25.673844 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:05:25.677878 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:05:25.680744 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:05:25.683242 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:05:25.683471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:05:25.692357 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:05:25.701806 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:05:25.710579 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:05:25.720497 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:05:25.727492 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:05:25.738558 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:05:25.742333 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:05:25.751518 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:05:25.757050 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 00:05:25.764485 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:05:25.774680 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 00:05:25.781630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:05:25.791582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 00:05:25.812472 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 00:05:25.818347 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:05:25.819360 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:05:25.828683 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:05:25.845683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:05:25.853401 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 00:05:25.872229 jq[1965]: false Jan 23 00:05:25.875795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:05:25.905071 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:05:25.906303 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:05:25.918969 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:05:25.921310 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:05:25.936855 jq[1975]: true Jan 23 00:05:25.968044 jq[1992]: true Jan 23 00:05:25.994604 extend-filesystems[1966]: Found /dev/nvme0n1p6 Jan 23 00:05:26.024615 dbus-daemon[1963]: [system] SELinux support is enabled Jan 23 00:05:26.024986 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:05:26.043171 extend-filesystems[1966]: Found /dev/nvme0n1p9 Jan 23 00:05:26.037054 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 00:05:26.037167 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:05:26.041373 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:05:26.041409 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:05:26.076937 (ntainerd)[2007]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:05:26.097693 extend-filesystems[1966]: Checking size of /dev/nvme0n1p9 Jan 23 00:05:26.100986 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 00:05:26.132310 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:05:26.149698 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 00:05:26.153452 tar[1981]: linux-arm64/LICENSE Jan 23 00:05:26.161918 tar[1981]: linux-arm64/helm Jan 23 00:05:26.181714 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:05:26.184278 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:05:26.251816 extend-filesystems[1966]: Resized partition /dev/nvme0n1p9 Jan 23 00:05:26.292490 update_engine[1974]: I20260123 00:05:26.292017 1974 main.cc:92] Flatcar Update Engine starting Jan 23 00:05:26.310071 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:05:26.323335 update_engine[1974]: I20260123 00:05:26.310158 1974 update_check_scheduler.cc:74] Next update check in 3m3s Jan 23 00:05:26.317598 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 00:05:26.343345 coreos-metadata[1962]: Jan 23 00:05:26.341 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 00:05:26.343345 coreos-metadata[1962]: Jan 23 00:05:26.342 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 00:05:26.343928 coreos-metadata[1962]: Jan 23 00:05:26.343 INFO Fetch successful Jan 23 00:05:26.343928 coreos-metadata[1962]: Jan 23 00:05:26.343 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.344 INFO Fetch successful Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.344 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.345 INFO Fetch successful Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.345 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.346 INFO Fetch successful Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.346 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.347 INFO Fetch failed with 404: resource not found Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.349 INFO Fetch successful Jan 23 00:05:26.349420 coreos-metadata[1962]: Jan 23 00:05:26.349 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.351 INFO Fetch successful Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 
00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.352 INFO Fetch successful Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.352 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.353 INFO Fetch successful Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.353 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 00:05:26.361973 coreos-metadata[1962]: Jan 23 00:05:26.354 INFO Fetch successful Jan 23 00:05:26.369288 extend-filesystems[2039]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:05:26.370597 ntpd[1968]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: ---------------------------------------------------- Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: corporation. 
Support and training for ntp-4 are Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: available at https://www.nwtime.org/support Jan 23 00:05:26.376523 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: ---------------------------------------------------- Jan 23 00:05:26.388249 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 00:05:26.370740 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:05:26.370762 ntpd[1968]: ---------------------------------------------------- Jan 23 00:05:26.370779 ntpd[1968]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:05:26.370797 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:05:26.370814 ntpd[1968]: corporation. Support and training for ntp-4 are Jan 23 00:05:26.370832 ntpd[1968]: available at https://www.nwtime.org/support Jan 23 00:05:26.370848 ntpd[1968]: ---------------------------------------------------- Jan 23 00:05:26.406219 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: proto: precision = 0.096 usec (-23) Jan 23 00:05:26.402885 ntpd[1968]: proto: precision = 0.096 usec (-23) Jan 23 00:05:26.407264 ntpd[1968]: basedate set to 2026-01-10 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: basedate set to 2026-01-10 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: gps base set to 2026-01-11 (week 2401) Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Listen normally on 3 eth0 172.31.17.104:123 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: Listen normally on 4 lo [::1]:123 Jan 23 00:05:26.409089 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: bind(21) AF_INET6 
[fe80::4cd:58ff:fe72:2bbf%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:05:26.407307 ntpd[1968]: gps base set to 2026-01-11 (week 2401) Jan 23 00:05:26.407511 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:05:26.407563 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:05:26.407909 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:05:26.407958 ntpd[1968]: Listen normally on 3 eth0 172.31.17.104:123 Jan 23 00:05:26.408007 ntpd[1968]: Listen normally on 4 lo [::1]:123 Jan 23 00:05:26.408054 ntpd[1968]: bind(21) AF_INET6 [fe80::4cd:58ff:fe72:2bbf%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:05:26.408094 ntpd[1968]: unable to create socket on eth0 (5) for [fe80::4cd:58ff:fe72:2bbf%2]:123 Jan 23 00:05:26.416184 ntpd[1968]: 23 Jan 00:05:26 ntpd[1968]: unable to create socket on eth0 (5) for [fe80::4cd:58ff:fe72:2bbf%2]:123 Jan 23 00:05:26.424253 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 00:05:26.434908 systemd-coredump[2043]: Process 1968 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 00:05:26.444675 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 00:05:26.569987 systemd[1]: Started systemd-coredump@0-2043-0.service - Process Core Dump (PID 2043/UID 0). Jan 23 00:05:26.639477 bash[2040]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:05:26.635686 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:05:26.656345 systemd[1]: Starting sshkeys.service... Jan 23 00:05:26.689208 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 00:05:26.696969 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:05:26.702504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 23 00:05:26.712494 extend-filesystems[2039]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 00:05:26.712494 extend-filesystems[2039]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 00:05:26.712494 extend-filesystems[2039]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 00:05:26.739032 extend-filesystems[1966]: Resized filesystem in /dev/nvme0n1p9 Jan 23 00:05:26.728059 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:05:26.730604 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:05:26.764014 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 00:05:26.772858 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 00:05:26.779816 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2014 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 00:05:26.797574 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 00:05:26.797620 systemd-logind[1973]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 00:05:26.801213 systemd-logind[1973]: New seat seat0. Jan 23 00:05:26.804401 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 00:05:26.810856 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:05:26.852172 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:05:26.867513 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 00:05:27.023635 locksmithd[2033]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:05:27.176790 containerd[2007]: time="2026-01-23T00:05:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:05:27.181135 containerd[2007]: time="2026-01-23T00:05:27.180546408Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:05:27.292194 containerd[2007]: time="2026-01-23T00:05:27.291975601Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.516µs" Jan 23 00:05:27.292194 containerd[2007]: time="2026-01-23T00:05:27.292056457Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:05:27.300276 containerd[2007]: time="2026-01-23T00:05:27.300168109Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:05:27.303764 containerd[2007]: time="2026-01-23T00:05:27.303457885Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:05:27.306094 coreos-metadata[2093]: Jan 23 00:05:27.305 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 00:05:27.308141 containerd[2007]: time="2026-01-23T00:05:27.303764005Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:05:27.308141 containerd[2007]: time="2026-01-23T00:05:27.307516573Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:05:27.308141 containerd[2007]: time="2026-01-23T00:05:27.307858285Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:05:27.308141 containerd[2007]: time="2026-01-23T00:05:27.307890205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:05:27.308630 coreos-metadata[2093]: Jan 23 00:05:27.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 00:05:27.310238 coreos-metadata[2093]: Jan 23 00:05:27.310 INFO Fetch successful Jan 23 00:05:27.310238 coreos-metadata[2093]: Jan 23 00:05:27.310 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 00:05:27.312496 containerd[2007]: time="2026-01-23T00:05:27.311457157Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:05:27.312496 containerd[2007]: time="2026-01-23T00:05:27.311836117Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:05:27.312496 containerd[2007]: time="2026-01-23T00:05:27.311903209Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:05:27.312707 coreos-metadata[2093]: Jan 23 00:05:27.311 INFO Fetch successful Jan 23 00:05:27.314990 unknown[2093]: wrote ssh authorized keys file for user: core Jan 23 00:05:27.315424 containerd[2007]: time="2026-01-23T00:05:27.311927497Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:05:27.315936 containerd[2007]: time="2026-01-23T00:05:27.315470989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:05:27.321747 
containerd[2007]: time="2026-01-23T00:05:27.320038981Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:05:27.321747 containerd[2007]: time="2026-01-23T00:05:27.320239993Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:05:27.321747 containerd[2007]: time="2026-01-23T00:05:27.320566477Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:05:27.321747 containerd[2007]: time="2026-01-23T00:05:27.320727133Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:05:27.324715 containerd[2007]: time="2026-01-23T00:05:27.324624013Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:05:27.328805 containerd[2007]: time="2026-01-23T00:05:27.324891193Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:05:27.336704 containerd[2007]: time="2026-01-23T00:05:27.336625753Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:05:27.336814 containerd[2007]: time="2026-01-23T00:05:27.336746581Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:05:27.336814 containerd[2007]: time="2026-01-23T00:05:27.336781321Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:05:27.336897 containerd[2007]: time="2026-01-23T00:05:27.336814489Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:05:27.336897 containerd[2007]: time="2026-01-23T00:05:27.336855829Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service 
type=io.containerd.service.v1 Jan 23 00:05:27.336897 containerd[2007]: time="2026-01-23T00:05:27.336888145Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:05:27.337052 containerd[2007]: time="2026-01-23T00:05:27.336923641Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:05:27.337052 containerd[2007]: time="2026-01-23T00:05:27.336952201Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:05:27.337052 containerd[2007]: time="2026-01-23T00:05:27.336983653Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:05:27.337052 containerd[2007]: time="2026-01-23T00:05:27.337010005Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:05:27.337052 containerd[2007]: time="2026-01-23T00:05:27.337033153Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:05:27.337288 containerd[2007]: time="2026-01-23T00:05:27.337062925Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337333513Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337394641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337429561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337621753Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337722541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.337773709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.338703637Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.338753593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.338788417Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.338816389Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:05:27.339273 containerd[2007]: time="2026-01-23T00:05:27.338844085Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:05:27.344152 containerd[2007]: time="2026-01-23T00:05:27.343164889Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:05:27.344152 containerd[2007]: time="2026-01-23T00:05:27.343257001Z" level=info msg="Start snapshots syncer" Jan 23 00:05:27.344152 containerd[2007]: time="2026-01-23T00:05:27.343336873Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:05:27.344381 containerd[2007]: time="2026-01-23T00:05:27.343820149Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:05:27.344381 containerd[2007]: time="2026-01-23T00:05:27.343923541Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:05:27.344381 containerd[2007]: time="2026-01-23T00:05:27.344028073Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350676361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350768053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350811769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350844625Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350878093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350906641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350934685Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.350989093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.351018517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.351047365Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:05:27.352352 containerd[2007]: time="2026-01-23T00:05:27.351154789Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353152621Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353233129Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353275297Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353300101Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353331121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353360689Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353543053Z" level=info msg="runtime interface created" Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353563225Z" level=info msg="created NRI interface" Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353587153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353619397Z" level=info msg="Connect containerd service" Jan 23 00:05:27.354154 containerd[2007]: time="2026-01-23T00:05:27.353682241Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:05:27.367246 
containerd[2007]: time="2026-01-23T00:05:27.367074337Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:05:27.400409 systemd-networkd[1817]: eth0: Gained IPv6LL Jan 23 00:05:27.457967 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:05:27.513053 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:05:27.517704 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:05:27.527753 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 00:05:27.540080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:05:27.557167 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:05:27.584413 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:05:27.609691 systemd[1]: Finished sshkeys.service. Jan 23 00:05:27.763979 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:05:27.963487 systemd-coredump[2061]: Process 1968 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1968: #0 0x0000aaaab64c0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab646fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab6470240 n/a (ntpd + 0x10240) #3 0x0000aaaab646be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab646d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab6475a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab646738c n/a (ntpd + 0x738c) #7 0x0000ffff85cd2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff85cd2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab64673f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 00:05:27.973829 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 00:05:27.975038 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 00:05:27.993076 systemd[1]: systemd-coredump@0-2043-0.service: Deactivated successfully. Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006213504Z" level=info msg="Start subscribing containerd event" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006340932Z" level=info msg="Start recovering state" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006479832Z" level=info msg="Start event monitor" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006504696Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006522360Z" level=info msg="Start streaming server" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006542064Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006558312Z" level=info msg="runtime interface starting up..." Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006573072Z" level=info msg="starting plugins..." 
Jan 23 00:05:28.008303 containerd[2007]: time="2026-01-23T00:05:28.006600324Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:05:28.011007 containerd[2007]: time="2026-01-23T00:05:28.009441048Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:05:28.011007 containerd[2007]: time="2026-01-23T00:05:28.009615336Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:05:28.011007 containerd[2007]: time="2026-01-23T00:05:28.009883800Z" level=info msg="containerd successfully booted in 0.839520s" Jan 23 00:05:28.009965 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:05:28.066528 amazon-ssm-agent[2163]: Initializing new seelog logger Jan 23 00:05:28.066528 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Jan 23 00:05:28.069293 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.069293 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.073664 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 processing appconfig overrides Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 processing appconfig overrides Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 00:05:28.075158 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 processing appconfig overrides Jan 23 00:05:28.079834 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0740 INFO Proxy environment variables: Jan 23 00:05:28.085128 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.092367 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:28.092367 amazon-ssm-agent[2163]: 2026/01/23 00:05:28 processing appconfig overrides Jan 23 00:05:28.090524 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 00:05:28.087704 polkitd[2079]: Started polkitd version 126 Jan 23 00:05:28.097170 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 00:05:28.113646 polkitd[2079]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 00:05:28.116577 polkitd[2079]: Loading rules from directory /run/polkit-1/rules.d Jan 23 00:05:28.116707 polkitd[2079]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:05:28.117485 polkitd[2079]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 00:05:28.117585 polkitd[2079]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:05:28.117673 polkitd[2079]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 00:05:28.121925 polkitd[2079]: Finished loading, compiling and executing 2 rules Jan 23 00:05:28.122856 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 00:05:28.135432 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 00:05:28.138291 polkitd[2079]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 00:05:28.179521 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0741 INFO https_proxy: Jan 23 00:05:28.206531 systemd-hostnamed[2014]: Hostname set to (transient) Jan 23 00:05:28.208465 systemd-resolved[1818]: System hostname changed to 'ip-172-31-17-104'. Jan 23 00:05:28.218704 ntpd[2206]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: ---------------------------------------------------- Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: corporation. Support and training for ntp-4 are Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: available at https://www.nwtime.org/support Jan 23 00:05:28.219994 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: ---------------------------------------------------- Jan 23 00:05:28.218818 ntpd[2206]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:05:28.218837 ntpd[2206]: ---------------------------------------------------- Jan 23 00:05:28.218854 ntpd[2206]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:05:28.224316 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: proto: precision = 0.096 usec (-23) Jan 23 00:05:28.218869 ntpd[2206]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:05:28.218885 ntpd[2206]: corporation. 
Support and training for ntp-4 are Jan 23 00:05:28.218901 ntpd[2206]: available at https://www.nwtime.org/support Jan 23 00:05:28.218917 ntpd[2206]: ---------------------------------------------------- Jan 23 00:05:28.222050 ntpd[2206]: proto: precision = 0.096 usec (-23) Jan 23 00:05:28.229519 ntpd[2206]: basedate set to 2026-01-10 Jan 23 00:05:28.229812 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: basedate set to 2026-01-10 Jan 23 00:05:28.229812 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: gps base set to 2026-01-11 (week 2401) Jan 23 00:05:28.229812 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:05:28.229812 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:05:28.229563 ntpd[2206]: gps base set to 2026-01-11 (week 2401) Jan 23 00:05:28.229732 ntpd[2206]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:05:28.230152 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:05:28.229781 ntpd[2206]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:05:28.230059 ntpd[2206]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:05:28.231143 ntpd[2206]: Listen normally on 3 eth0 172.31.17.104:123 Jan 23 00:05:28.231733 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen normally on 3 eth0 172.31.17.104:123 Jan 23 00:05:28.231733 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen normally on 4 lo [::1]:123 Jan 23 00:05:28.231733 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listen normally on 5 eth0 [fe80::4cd:58ff:fe72:2bbf%2]:123 Jan 23 00:05:28.231733 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: Listening on routing socket on fd #22 for interface updates Jan 23 00:05:28.231267 ntpd[2206]: Listen normally on 4 lo [::1]:123 Jan 23 00:05:28.231314 ntpd[2206]: Listen normally on 5 eth0 [fe80::4cd:58ff:fe72:2bbf%2]:123 Jan 23 00:05:28.231359 ntpd[2206]: Listening on routing socket on fd #22 for interface updates Jan 23 00:05:28.254264 ntpd[2206]: kernel reports TIME_ERROR: 0x41: 
Clock Unsynchronized Jan 23 00:05:28.255997 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:05:28.255997 ntpd[2206]: 23 Jan 00:05:28 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:05:28.254326 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:05:28.279083 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0741 INFO http_proxy: Jan 23 00:05:28.379886 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0741 INFO no_proxy: Jan 23 00:05:28.478229 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0743 INFO Checking if agent identity type OnPrem can be assumed Jan 23 00:05:28.576827 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.0744 INFO Checking if agent identity type EC2 can be assumed Jan 23 00:05:28.603268 tar[1981]: linux-arm64/README.md Jan 23 00:05:28.651845 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:05:28.676013 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3391 INFO Agent will take identity from EC2 Jan 23 00:05:28.707362 sshd_keygen[1999]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:05:28.775591 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3456 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 00:05:28.776379 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:05:28.789834 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:05:28.799439 systemd[1]: Started sshd@0-172.31.17.104:22-4.153.228.146:52298.service - OpenSSH per-connection server daemon (4.153.228.146:52298). Jan 23 00:05:28.841784 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:05:28.844161 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:05:28.853953 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 23 00:05:28.875146 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3457 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 00:05:28.919041 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:05:28.931953 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:05:28.940875 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:05:28.946953 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:05:28.975442 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3457 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 00:05:29.076363 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3457 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 00:05:29.177748 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3457 INFO [Registrar] Starting registrar module Jan 23 00:05:29.276566 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3492 INFO [EC2Identity] Checking disk for registration info Jan 23 00:05:29.376965 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3493 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 00:05:29.466568 sshd[2230]: Accepted publickey for core from 4.153.228.146 port 52298 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:29.470974 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:29.477335 amazon-ssm-agent[2163]: 2026-01-23 00:05:28.3493 INFO [EC2Identity] Generating registration keypair Jan 23 00:05:29.499474 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:05:29.507702 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:05:29.554254 systemd-logind[1973]: New session 1 of user core. Jan 23 00:05:29.574199 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 23 00:05:29.592098 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:05:29.620423 (systemd)[2242]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:05:29.629752 systemd-logind[1973]: New session c1 of user core. Jan 23 00:05:29.710605 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7103 INFO [EC2Identity] Checking write access before registering Jan 23 00:05:29.776035 amazon-ssm-agent[2163]: 2026/01/23 00:05:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:29.777125 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:05:29.777125 amazon-ssm-agent[2163]: 2026/01/23 00:05:29 processing appconfig overrides Jan 23 00:05:29.811287 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7132 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 00:05:29.842254 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7756 INFO [EC2Identity] EC2 registration was successful. Jan 23 00:05:29.842254 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7756 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Jan 23 00:05:29.842254 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7757 INFO [CredentialRefresher] credentialRefresher has started Jan 23 00:05:29.843422 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.7758 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 00:05:29.843422 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.8414 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 00:05:29.843422 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.8419 INFO [CredentialRefresher] Credentials ready Jan 23 00:05:29.895398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:05:29.903896 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 23 00:05:29.911730 amazon-ssm-agent[2163]: 2026-01-23 00:05:29.8432 INFO [CredentialRefresher] Next credential rotation will be in 29.9999691243 minutes Jan 23 00:05:29.923832 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:05:29.980058 systemd[2242]: Queued start job for default target default.target. Jan 23 00:05:29.990076 systemd[2242]: Created slice app.slice - User Application Slice. Jan 23 00:05:29.990446 systemd[2242]: Reached target paths.target - Paths. Jan 23 00:05:29.990557 systemd[2242]: Reached target timers.target - Timers. Jan 23 00:05:29.996296 systemd[2242]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:05:30.026251 systemd[2242]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:05:30.027023 systemd[2242]: Reached target sockets.target - Sockets. Jan 23 00:05:30.028282 systemd[2242]: Reached target basic.target - Basic System. Jan 23 00:05:30.028972 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:05:30.030710 systemd[2242]: Reached target default.target - Main User Target. Jan 23 00:05:30.030930 systemd[2242]: Startup finished in 376ms. Jan 23 00:05:30.040498 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:05:30.044206 systemd[1]: Startup finished in 3.783s (kernel) + 10.337s (initrd) + 11.083s (userspace) = 25.204s. Jan 23 00:05:30.441478 systemd[1]: Started sshd@1-172.31.17.104:22-4.153.228.146:52306.service - OpenSSH per-connection server daemon (4.153.228.146:52306). 
Jan 23 00:05:30.914516 amazon-ssm-agent[2163]: 2026-01-23 00:05:30.9142 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 00:05:30.975151 kubelet[2253]: E0123 00:05:30.974550 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:05:30.981178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:05:30.981518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:05:30.983275 systemd[1]: kubelet.service: Consumed 1.543s CPU time, 255.5M memory peak. Jan 23 00:05:31.016776 amazon-ssm-agent[2163]: 2026-01-23 00:05:30.9319 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2274) started Jan 23 00:05:31.069807 sshd[2267]: Accepted publickey for core from 4.153.228.146 port 52306 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:31.073432 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:31.087001 systemd-logind[1973]: New session 2 of user core. Jan 23 00:05:31.096428 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:05:31.140665 amazon-ssm-agent[2163]: 2026-01-23 00:05:30.9319 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 00:05:31.447275 sshd[2280]: Connection closed by 4.153.228.146 port 52306 Jan 23 00:05:31.448634 sshd-session[2267]: pam_unix(sshd:session): session closed for user core Jan 23 00:05:31.458245 systemd[1]: sshd@1-172.31.17.104:22-4.153.228.146:52306.service: Deactivated successfully. 
Jan 23 00:05:31.459261 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. Jan 23 00:05:31.463596 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 00:05:31.470245 systemd-logind[1973]: Removed session 2. Jan 23 00:05:31.562058 systemd[1]: Started sshd@2-172.31.17.104:22-4.153.228.146:52318.service - OpenSSH per-connection server daemon (4.153.228.146:52318). Jan 23 00:05:32.133181 sshd[2292]: Accepted publickey for core from 4.153.228.146 port 52318 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:32.135510 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:32.143409 systemd-logind[1973]: New session 3 of user core. Jan 23 00:05:32.153356 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:05:32.512246 sshd[2295]: Connection closed by 4.153.228.146 port 52318 Jan 23 00:05:32.512425 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 23 00:05:32.520251 systemd[1]: sshd@2-172.31.17.104:22-4.153.228.146:52318.service: Deactivated successfully. Jan 23 00:05:32.525755 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 00:05:32.530415 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Jan 23 00:05:32.533244 systemd-logind[1973]: Removed session 3. Jan 23 00:05:32.600564 systemd[1]: Started sshd@3-172.31.17.104:22-4.153.228.146:52326.service - OpenSSH per-connection server daemon (4.153.228.146:52326). Jan 23 00:05:33.115165 sshd[2301]: Accepted publickey for core from 4.153.228.146 port 52326 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:33.116767 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:33.126188 systemd-logind[1973]: New session 4 of user core. Jan 23 00:05:33.133418 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 23 00:05:33.469098 sshd[2304]: Connection closed by 4.153.228.146 port 52326 Jan 23 00:05:33.470414 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jan 23 00:05:33.477621 systemd[1]: sshd@3-172.31.17.104:22-4.153.228.146:52326.service: Deactivated successfully. Jan 23 00:05:33.481555 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:05:33.485568 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:05:33.488896 systemd-logind[1973]: Removed session 4. Jan 23 00:05:33.570206 systemd[1]: Started sshd@4-172.31.17.104:22-4.153.228.146:52334.service - OpenSSH per-connection server daemon (4.153.228.146:52334). Jan 23 00:05:34.093715 sshd[2310]: Accepted publickey for core from 4.153.228.146 port 52334 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:34.095946 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:34.104758 systemd-logind[1973]: New session 5 of user core. Jan 23 00:05:34.114397 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 00:05:34.412206 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:05:34.412847 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:05:34.431213 sudo[2314]: pam_unix(sudo:session): session closed for user root Jan 23 00:05:34.509159 sshd[2313]: Connection closed by 4.153.228.146 port 52334 Jan 23 00:05:34.510739 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Jan 23 00:05:34.520643 systemd[1]: sshd@4-172.31.17.104:22-4.153.228.146:52334.service: Deactivated successfully. Jan 23 00:05:34.524697 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:05:34.528811 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:05:34.533230 systemd-logind[1973]: Removed session 5. 
Jan 23 00:05:34.617983 systemd[1]: Started sshd@5-172.31.17.104:22-4.153.228.146:35766.service - OpenSSH per-connection server daemon (4.153.228.146:35766). Jan 23 00:05:35.180274 sshd[2320]: Accepted publickey for core from 4.153.228.146 port 35766 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:35.182619 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:35.190476 systemd-logind[1973]: New session 6 of user core. Jan 23 00:05:35.200343 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 00:05:35.476885 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:05:35.478314 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:05:35.486809 sudo[2325]: pam_unix(sudo:session): session closed for user root Jan 23 00:05:35.496720 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:05:35.497768 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:05:35.515671 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:05:35.588014 augenrules[2347]: No rules Jan 23 00:05:35.590699 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:05:35.592285 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:05:35.594737 sudo[2324]: pam_unix(sudo:session): session closed for user root Jan 23 00:05:35.677942 sshd[2323]: Connection closed by 4.153.228.146 port 35766 Jan 23 00:05:35.678753 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Jan 23 00:05:35.687001 systemd[1]: sshd@5-172.31.17.104:22-4.153.228.146:35766.service: Deactivated successfully. Jan 23 00:05:35.690708 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 23 00:05:35.692598 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:05:35.695675 systemd-logind[1973]: Removed session 6. Jan 23 00:05:35.764271 systemd[1]: Started sshd@6-172.31.17.104:22-4.153.228.146:35782.service - OpenSSH per-connection server daemon (4.153.228.146:35782). Jan 23 00:05:36.289147 sshd[2356]: Accepted publickey for core from 4.153.228.146 port 35782 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:05:36.290620 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:05:36.300372 systemd-logind[1973]: New session 7 of user core. Jan 23 00:05:36.304381 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 00:05:36.568846 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:05:36.569889 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:05:37.331585 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:05:37.348637 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:05:37.909148 dockerd[2378]: time="2026-01-23T00:05:37.908786978Z" level=info msg="Starting up" Jan 23 00:05:37.911010 dockerd[2378]: time="2026-01-23T00:05:37.910966103Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:05:37.931964 dockerd[2378]: time="2026-01-23T00:05:37.931886462Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:05:38.260684 dockerd[2378]: time="2026-01-23T00:05:38.260313932Z" level=info msg="Loading containers: start." 
Jan 23 00:05:38.275163 kernel: Initializing XFRM netlink socket Jan 23 00:05:38.670973 (udev-worker)[2398]: Network interface NamePolicy= disabled on kernel command line. Jan 23 00:05:38.764945 systemd-networkd[1817]: docker0: Link UP Jan 23 00:05:38.770740 dockerd[2378]: time="2026-01-23T00:05:38.770665860Z" level=info msg="Loading containers: done." Jan 23 00:05:38.798190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3091565512-merged.mount: Deactivated successfully. Jan 23 00:05:38.802903 dockerd[2378]: time="2026-01-23T00:05:38.802449236Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:05:38.802903 dockerd[2378]: time="2026-01-23T00:05:38.802562477Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:05:38.802903 dockerd[2378]: time="2026-01-23T00:05:38.802722168Z" level=info msg="Initializing buildkit" Jan 23 00:05:38.849073 dockerd[2378]: time="2026-01-23T00:05:38.849002057Z" level=info msg="Completed buildkit initialization" Jan 23 00:05:38.860000 dockerd[2378]: time="2026-01-23T00:05:38.859925848Z" level=info msg="Daemon has completed initialization" Jan 23 00:05:38.860365 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:05:38.861479 dockerd[2378]: time="2026-01-23T00:05:38.860183737Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:05:40.076824 containerd[2007]: time="2026-01-23T00:05:40.076766493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 00:05:40.666677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738504716.mount: Deactivated successfully. Jan 23 00:05:41.232163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Jan 23 00:05:41.235734 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:05:41.659549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:05:41.677908 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:05:41.771719 kubelet[2654]: E0123 00:05:41.771509 2654 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:05:41.779367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:05:41.779831 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:05:41.782349 systemd[1]: kubelet.service: Consumed 358ms CPU time, 105.3M memory peak. 
Jan 23 00:05:42.267326 containerd[2007]: time="2026-01-23T00:05:42.267260908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:42.269773 containerd[2007]: time="2026-01-23T00:05:42.269672745Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 00:05:42.269967 containerd[2007]: time="2026-01-23T00:05:42.269907847Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:42.276518 containerd[2007]: time="2026-01-23T00:05:42.276433600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:42.281043 containerd[2007]: time="2026-01-23T00:05:42.280954388Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.203473321s" Jan 23 00:05:42.281043 containerd[2007]: time="2026-01-23T00:05:42.281031034Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 00:05:42.282124 containerd[2007]: time="2026-01-23T00:05:42.282057019Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 00:05:43.870241 containerd[2007]: time="2026-01-23T00:05:43.870158027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:43.873900 containerd[2007]: time="2026-01-23T00:05:43.873053887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 00:05:43.873900 containerd[2007]: time="2026-01-23T00:05:43.873405218Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:43.883281 containerd[2007]: time="2026-01-23T00:05:43.883221180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:43.885624 containerd[2007]: time="2026-01-23T00:05:43.885559420Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.60324348s" Jan 23 00:05:43.885724 containerd[2007]: time="2026-01-23T00:05:43.885621936Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 00:05:43.886557 containerd[2007]: time="2026-01-23T00:05:43.886484074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 00:05:45.292190 containerd[2007]: time="2026-01-23T00:05:45.290659261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:45.293884 containerd[2007]: time="2026-01-23T00:05:45.293763556Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 00:05:45.296823 containerd[2007]: time="2026-01-23T00:05:45.296755836Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:45.307088 containerd[2007]: time="2026-01-23T00:05:45.306998923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:45.309600 containerd[2007]: time="2026-01-23T00:05:45.309217980Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.422531712s" Jan 23 00:05:45.309600 containerd[2007]: time="2026-01-23T00:05:45.309281191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 00:05:45.310068 containerd[2007]: time="2026-01-23T00:05:45.310032047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 00:05:46.566966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765680886.mount: Deactivated successfully. 
Jan 23 00:05:47.112476 containerd[2007]: time="2026-01-23T00:05:47.112419470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:47.113460 containerd[2007]: time="2026-01-23T00:05:47.113411118Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 00:05:47.114723 containerd[2007]: time="2026-01-23T00:05:47.114620987Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:47.118134 containerd[2007]: time="2026-01-23T00:05:47.118057332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:47.119609 containerd[2007]: time="2026-01-23T00:05:47.119357882Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.809067803s" Jan 23 00:05:47.119609 containerd[2007]: time="2026-01-23T00:05:47.119438178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 00:05:47.120052 containerd[2007]: time="2026-01-23T00:05:47.120011633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 00:05:47.611145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597626211.mount: Deactivated successfully. 
Jan 23 00:05:48.752151 containerd[2007]: time="2026-01-23T00:05:48.751463114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:48.754680 containerd[2007]: time="2026-01-23T00:05:48.754632614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 00:05:48.756340 containerd[2007]: time="2026-01-23T00:05:48.756276392Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:48.760653 containerd[2007]: time="2026-01-23T00:05:48.760601001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:48.763230 containerd[2007]: time="2026-01-23T00:05:48.763178221Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.642943276s" Jan 23 00:05:48.763520 containerd[2007]: time="2026-01-23T00:05:48.763385504Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 00:05:48.764400 containerd[2007]: time="2026-01-23T00:05:48.764344592Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 00:05:49.209496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754385861.mount: Deactivated successfully. 
Jan 23 00:05:49.216299 containerd[2007]: time="2026-01-23T00:05:49.216229809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:05:49.217588 containerd[2007]: time="2026-01-23T00:05:49.217532820Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 00:05:49.218971 containerd[2007]: time="2026-01-23T00:05:49.218739159Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:05:49.222618 containerd[2007]: time="2026-01-23T00:05:49.222516138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:05:49.224701 containerd[2007]: time="2026-01-23T00:05:49.223843546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.441301ms" Jan 23 00:05:49.224701 containerd[2007]: time="2026-01-23T00:05:49.223899385Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 00:05:49.225046 containerd[2007]: time="2026-01-23T00:05:49.224985292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 00:05:49.744798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443210390.mount: 
Deactivated successfully. Jan 23 00:05:52.030916 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:05:52.035052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:05:52.230187 containerd[2007]: time="2026-01-23T00:05:52.230126169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:52.233812 containerd[2007]: time="2026-01-23T00:05:52.233743276Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 00:05:52.237137 containerd[2007]: time="2026-01-23T00:05:52.236667386Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:52.246431 containerd[2007]: time="2026-01-23T00:05:52.246377874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:05:52.250730 containerd[2007]: time="2026-01-23T00:05:52.250639680Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.025583564s" Jan 23 00:05:52.251033 containerd[2007]: time="2026-01-23T00:05:52.250992957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 00:05:52.614750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:05:52.628917 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:05:52.709143 kubelet[2803]: E0123 00:05:52.709055 2803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:05:52.713603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:05:52.713909 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:05:52.714813 systemd[1]: kubelet.service: Consumed 316ms CPU time, 105.3M memory peak. Jan 23 00:05:58.242597 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 00:06:00.548805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:06:00.549222 systemd[1]: kubelet.service: Consumed 316ms CPU time, 105.3M memory peak. Jan 23 00:06:00.553012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:00.604671 systemd[1]: Reload requested from client PID 2833 ('systemctl') (unit session-7.scope)... Jan 23 00:06:00.604905 systemd[1]: Reloading... Jan 23 00:06:00.831165 zram_generator::config[2883]: No configuration found. Jan 23 00:06:01.300752 systemd[1]: Reloading finished in 694 ms. Jan 23 00:06:01.394280 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:06:01.394646 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:06:01.395410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:06:01.395595 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95M memory peak. 
Jan 23 00:06:01.400186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:01.719918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:06:01.739954 (kubelet)[2941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:06:01.814868 kubelet[2941]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:06:01.817386 kubelet[2941]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:06:01.817386 kubelet[2941]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 00:06:01.817386 kubelet[2941]: I0123 00:06:01.815304 2941 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:06:02.981671 kubelet[2941]: I0123 00:06:02.981619 2941 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 00:06:02.982279 kubelet[2941]: I0123 00:06:02.982257 2941 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:06:02.982822 kubelet[2941]: I0123 00:06:02.982801 2941 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 00:06:03.026824 kubelet[2941]: E0123 00:06:03.026673 2941 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:06:03.029070 kubelet[2941]: I0123 00:06:03.029017 2941 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:06:03.042781 kubelet[2941]: I0123 00:06:03.042741 2941 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:06:03.049449 kubelet[2941]: I0123 00:06:03.049220 2941 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 00:06:03.050801 kubelet[2941]: I0123 00:06:03.050719 2941 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:06:03.051357 kubelet[2941]: I0123 00:06:03.050799 2941 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-104","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:06:03.051559 kubelet[2941]: I0123 00:06:03.051513 2941 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 23 00:06:03.051559 kubelet[2941]: I0123 00:06:03.051537 2941 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 00:06:03.051926 kubelet[2941]: I0123 00:06:03.051888 2941 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:06:03.059257 kubelet[2941]: I0123 00:06:03.059215 2941 kubelet.go:446] "Attempting to sync node with API server" Jan 23 00:06:03.059444 kubelet[2941]: I0123 00:06:03.059425 2941 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:06:03.059567 kubelet[2941]: I0123 00:06:03.059550 2941 kubelet.go:352] "Adding apiserver pod source" Jan 23 00:06:03.059683 kubelet[2941]: I0123 00:06:03.059662 2941 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:06:03.066272 kubelet[2941]: W0123 00:06:03.066071 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-104&limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused Jan 23 00:06:03.066420 kubelet[2941]: E0123 00:06:03.066286 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-104&limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:06:03.067321 kubelet[2941]: W0123 00:06:03.067073 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused Jan 23 00:06:03.067321 kubelet[2941]: E0123 00:06:03.067184 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.17.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:06:03.067514 kubelet[2941]: I0123 00:06:03.067455 2941 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:06:03.068507 kubelet[2941]: I0123 00:06:03.068459 2941 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 00:06:03.068718 kubelet[2941]: W0123 00:06:03.068685 2941 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:06:03.071135 kubelet[2941]: I0123 00:06:03.070923 2941 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 00:06:03.071135 kubelet[2941]: I0123 00:06:03.070990 2941 server.go:1287] "Started kubelet" Jan 23 00:06:03.074355 kubelet[2941]: I0123 00:06:03.074230 2941 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:06:03.085374 kubelet[2941]: I0123 00:06:03.084254 2941 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:06:03.085374 kubelet[2941]: I0123 00:06:03.084989 2941 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 00:06:03.085600 kubelet[2941]: E0123 00:06:03.085540 2941 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-104\" not found" Jan 23 00:06:03.086466 kubelet[2941]: I0123 00:06:03.086434 2941 server.go:479] "Adding debug handlers to kubelet server" Jan 23 00:06:03.089496 kubelet[2941]: I0123 00:06:03.089383 2941 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:06:03.089983 kubelet[2941]: I0123 00:06:03.089957 2941 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:06:03.090249 kubelet[2941]: I0123 00:06:03.090210 2941 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 00:06:03.090330 kubelet[2941]: I0123 00:06:03.090299 2941 reconciler.go:26] "Reconciler: start to sync state" Jan 23 00:06:03.095922 kubelet[2941]: I0123 00:06:03.095865 2941 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:06:03.102140 kubelet[2941]: E0123 00:06:03.101280 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-104?timeout=10s\": dial tcp 172.31.17.104:6443: connect: connection refused" interval="200ms" Jan 23 00:06:03.105427 kubelet[2941]: I0123 00:06:03.102605 2941 factory.go:221] Registration of the systemd container factory successfully Jan 23 00:06:03.105427 kubelet[2941]: I0123 00:06:03.102764 2941 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:06:03.105427 kubelet[2941]: I0123 00:06:03.104432 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 00:06:03.106644 kubelet[2941]: I0123 00:06:03.106584 2941 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 00:06:03.106644 kubelet[2941]: I0123 00:06:03.106634 2941 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 00:06:03.106815 kubelet[2941]: I0123 00:06:03.106667 2941 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 00:06:03.106815 kubelet[2941]: I0123 00:06:03.106681 2941 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 00:06:03.106815 kubelet[2941]: E0123 00:06:03.106749 2941 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:06:03.109761 kubelet[2941]: E0123 00:06:03.103747 2941 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.104:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.104:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-104.188d336af37b2f52 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-104,UID:ip-172-31-17-104,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-104,},FirstTimestamp:2026-01-23 00:06:03.070959442 +0000 UTC m=+1.324358863,LastTimestamp:2026-01-23 00:06:03.070959442 +0000 UTC m=+1.324358863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-104,}" Jan 23 00:06:03.110347 kubelet[2941]: W0123 00:06:03.110283 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused Jan 23 00:06:03.111916 kubelet[2941]: E0123 00:06:03.110507 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:06:03.114012 
kubelet[2941]: E0123 00:06:03.113973 2941 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:06:03.114548 kubelet[2941]: I0123 00:06:03.114521 2941 factory.go:221] Registration of the containerd container factory successfully Jan 23 00:06:03.122542 kubelet[2941]: W0123 00:06:03.122456 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused Jan 23 00:06:03.122688 kubelet[2941]: E0123 00:06:03.122551 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError" Jan 23 00:06:03.151720 kubelet[2941]: I0123 00:06:03.151679 2941 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:06:03.151720 kubelet[2941]: I0123 00:06:03.151710 2941 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:06:03.151910 kubelet[2941]: I0123 00:06:03.151742 2941 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:06:03.154324 kubelet[2941]: I0123 00:06:03.154273 2941 policy_none.go:49] "None policy: Start" Jan 23 00:06:03.154324 kubelet[2941]: I0123 00:06:03.154313 2941 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 00:06:03.154485 kubelet[2941]: I0123 00:06:03.154336 2941 state_mem.go:35] "Initializing new in-memory state store" Jan 23 00:06:03.169283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 00:06:03.186174 kubelet[2941]: E0123 00:06:03.186135 2941 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-104\" not found" Jan 23 00:06:03.186884 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:06:03.197431 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 00:06:03.207749 kubelet[2941]: E0123 00:06:03.207712 2941 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 00:06:03.208597 kubelet[2941]: I0123 00:06:03.208551 2941 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 00:06:03.209230 kubelet[2941]: I0123 00:06:03.208879 2941 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:06:03.209230 kubelet[2941]: I0123 00:06:03.208911 2941 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:06:03.210372 kubelet[2941]: I0123 00:06:03.209926 2941 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:06:03.212615 kubelet[2941]: E0123 00:06:03.212163 2941 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:06:03.212615 kubelet[2941]: E0123 00:06:03.212268 2941 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-104\" not found" Jan 23 00:06:03.302188 kubelet[2941]: E0123 00:06:03.301840 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-104?timeout=10s\": dial tcp 172.31.17.104:6443: connect: connection refused" interval="400ms" Jan 23 00:06:03.312806 kubelet[2941]: I0123 00:06:03.312052 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-104" Jan 23 00:06:03.312806 kubelet[2941]: E0123 00:06:03.312720 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.104:6443/api/v1/nodes\": dial tcp 172.31.17.104:6443: connect: connection refused" node="ip-172-31-17-104" Jan 23 00:06:03.433667 systemd[1]: Created slice kubepods-burstable-podc11e9eb2c9821fe9d6ab77808b25a522.slice - libcontainer container kubepods-burstable-podc11e9eb2c9821fe9d6ab77808b25a522.slice. Jan 23 00:06:03.446855 kubelet[2941]: E0123 00:06:03.446793 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104" Jan 23 00:06:03.450419 systemd[1]: Created slice kubepods-burstable-pod8caac315fcbbc38f350faa8ec1afa1d1.slice - libcontainer container kubepods-burstable-pod8caac315fcbbc38f350faa8ec1afa1d1.slice. 
Jan 23 00:06:03.455144 kubelet[2941]: E0123 00:06:03.454996 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:03.461509 systemd[1]: Created slice kubepods-burstable-pod42cc1c70daa2e83181c8b622aa20912b.slice - libcontainer container kubepods-burstable-pod42cc1c70daa2e83181c8b622aa20912b.slice.
Jan 23 00:06:03.466157 kubelet[2941]: E0123 00:06:03.465826 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:03.491864 kubelet[2941]: I0123 00:06:03.491826 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:03.492205 kubelet[2941]: I0123 00:06:03.492154 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:03.492391 kubelet[2941]: I0123 00:06:03.492339 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:03.492779 kubelet[2941]: I0123 00:06:03.492510 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:03.492779 kubelet[2941]: I0123 00:06:03.492558 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:03.492779 kubelet[2941]: I0123 00:06:03.492598 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42cc1c70daa2e83181c8b622aa20912b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-104\" (UID: \"42cc1c70daa2e83181c8b622aa20912b\") " pod="kube-system/kube-scheduler-ip-172-31-17-104"
Jan 23 00:06:03.492779 kubelet[2941]: I0123 00:06:03.492633 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-ca-certs\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:03.492779 kubelet[2941]: I0123 00:06:03.492671 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:03.493035 kubelet[2941]: I0123 00:06:03.492703 2941 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:03.515825 kubelet[2941]: I0123 00:06:03.515756 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-104"
Jan 23 00:06:03.516687 kubelet[2941]: E0123 00:06:03.516641 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.104:6443/api/v1/nodes\": dial tcp 172.31.17.104:6443: connect: connection refused" node="ip-172-31-17-104"
Jan 23 00:06:03.703848 kubelet[2941]: E0123 00:06:03.703688 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-104?timeout=10s\": dial tcp 172.31.17.104:6443: connect: connection refused" interval="800ms"
Jan 23 00:06:03.749871 containerd[2007]: time="2026-01-23T00:06:03.748749842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-104,Uid:c11e9eb2c9821fe9d6ab77808b25a522,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:03.756879 containerd[2007]: time="2026-01-23T00:06:03.756830906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-104,Uid:8caac315fcbbc38f350faa8ec1afa1d1,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:03.767951 containerd[2007]: time="2026-01-23T00:06:03.767832218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-104,Uid:42cc1c70daa2e83181c8b622aa20912b,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:03.805216 containerd[2007]: time="2026-01-23T00:06:03.804221930Z" level=info msg="connecting to shim f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40" address="unix:///run/containerd/s/d5bded11303e2b9cfd2183d5102f312bedbf166f8865b535c54555ed2a564617" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:03.820469 containerd[2007]: time="2026-01-23T00:06:03.820390022Z" level=info msg="connecting to shim 2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d" address="unix:///run/containerd/s/4120255a48e20a677965de9ab0106e606008e01072a0a0c566fe911050f9db4a" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:03.865488 containerd[2007]: time="2026-01-23T00:06:03.865424618Z" level=info msg="connecting to shim 47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226" address="unix:///run/containerd/s/dd5172d15ff5020ad7f5a7502413b6cf0e56fad09c4ebe04a8dd4c0e61b9ed87" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:03.884582 systemd[1]: Started cri-containerd-f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40.scope - libcontainer container f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40.
Jan 23 00:06:03.916765 systemd[1]: Started cri-containerd-2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d.scope - libcontainer container 2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d.
Jan 23 00:06:03.921240 kubelet[2941]: I0123 00:06:03.921190 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-104"
Jan 23 00:06:03.921881 kubelet[2941]: E0123 00:06:03.921824 2941 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.104:6443/api/v1/nodes\": dial tcp 172.31.17.104:6443: connect: connection refused" node="ip-172-31-17-104"
Jan 23 00:06:03.948703 systemd[1]: Started cri-containerd-47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226.scope - libcontainer container 47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226.
Jan 23 00:06:03.961846 kubelet[2941]: W0123 00:06:03.961571 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-104&limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused
Jan 23 00:06:03.961846 kubelet[2941]: E0123 00:06:03.961725 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-104&limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:06:04.052011 kubelet[2941]: W0123 00:06:04.051855 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused
Jan 23 00:06:04.052011 kubelet[2941]: E0123 00:06:04.051963 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:06:04.054493 containerd[2007]: time="2026-01-23T00:06:04.054402059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-104,Uid:8caac315fcbbc38f350faa8ec1afa1d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40\""
Jan 23 00:06:04.064633 containerd[2007]: time="2026-01-23T00:06:04.064267859Z" level=info msg="CreateContainer within sandbox \"f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 00:06:04.081350 containerd[2007]: time="2026-01-23T00:06:04.081275219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-104,Uid:c11e9eb2c9821fe9d6ab77808b25a522,Namespace:kube-system,Attempt:0,} returns sandbox id \"2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d\""
Jan 23 00:06:04.086779 containerd[2007]: time="2026-01-23T00:06:04.086372015Z" level=info msg="CreateContainer within sandbox \"2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 00:06:04.086779 containerd[2007]: time="2026-01-23T00:06:04.086739239Z" level=info msg="Container 0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:04.103134 containerd[2007]: time="2026-01-23T00:06:04.102758543Z" level=info msg="CreateContainer within sandbox \"f0acebba23c141e46f57869c50369c3edf4f7413d2fc5f8b541bb5860221da40\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f\""
Jan 23 00:06:04.104099 containerd[2007]: time="2026-01-23T00:06:04.104043419Z" level=info msg="Container 345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:04.105309 containerd[2007]: time="2026-01-23T00:06:04.105257471Z" level=info msg="StartContainer for \"0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f\""
Jan 23 00:06:04.108480 containerd[2007]: time="2026-01-23T00:06:04.108397511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-104,Uid:42cc1c70daa2e83181c8b622aa20912b,Namespace:kube-system,Attempt:0,} returns sandbox id \"47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226\""
Jan 23 00:06:04.113085 containerd[2007]: time="2026-01-23T00:06:04.112969379Z" level=info msg="connecting to shim 0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f" address="unix:///run/containerd/s/d5bded11303e2b9cfd2183d5102f312bedbf166f8865b535c54555ed2a564617" protocol=ttrpc version=3
Jan 23 00:06:04.125257 containerd[2007]: time="2026-01-23T00:06:04.125183531Z" level=info msg="CreateContainer within sandbox \"47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 00:06:04.136157 containerd[2007]: time="2026-01-23T00:06:04.136053600Z" level=info msg="CreateContainer within sandbox \"2925586518549057e92f0b0a45f83f48cf57e40212f73ed8f1ab0a15d0f9a05d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e\""
Jan 23 00:06:04.138221 containerd[2007]: time="2026-01-23T00:06:04.138135000Z" level=info msg="StartContainer for \"345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e\""
Jan 23 00:06:04.140590 containerd[2007]: time="2026-01-23T00:06:04.140486232Z" level=info msg="connecting to shim 345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e" address="unix:///run/containerd/s/4120255a48e20a677965de9ab0106e606008e01072a0a0c566fe911050f9db4a" protocol=ttrpc version=3
Jan 23 00:06:04.145258 containerd[2007]: time="2026-01-23T00:06:04.145183056Z" level=info msg="Container e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:04.159584 containerd[2007]: time="2026-01-23T00:06:04.159510648Z" level=info msg="CreateContainer within sandbox \"47ae18c67529f438c25888992542c009e10e1bdeed1bc5787bf92198b1413226\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2\""
Jan 23 00:06:04.160315 containerd[2007]: time="2026-01-23T00:06:04.160233828Z" level=info msg="StartContainer for \"e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2\""
Jan 23 00:06:04.164023 kubelet[2941]: W0123 00:06:04.163907 2941 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.104:6443: connect: connection refused
Jan 23 00:06:04.164383 kubelet[2941]: E0123 00:06:04.164034 2941 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.104:6443: connect: connection refused" logger="UnhandledError"
Jan 23 00:06:04.166918 containerd[2007]: time="2026-01-23T00:06:04.166845036Z" level=info msg="connecting to shim e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2" address="unix:///run/containerd/s/dd5172d15ff5020ad7f5a7502413b6cf0e56fad09c4ebe04a8dd4c0e61b9ed87" protocol=ttrpc version=3
Jan 23 00:06:04.179739 systemd[1]: Started cri-containerd-0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f.scope - libcontainer container 0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f.
Jan 23 00:06:04.208528 systemd[1]: Started cri-containerd-345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e.scope - libcontainer container 345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e.
Jan 23 00:06:04.233471 systemd[1]: Started cri-containerd-e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2.scope - libcontainer container e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2.
Jan 23 00:06:04.366816 containerd[2007]: time="2026-01-23T00:06:04.366733045Z" level=info msg="StartContainer for \"345cdb619110e2b1e05b1e70e8017ec9145f6b7215b307ab4f025f2dd31d7f6e\" returns successfully"
Jan 23 00:06:04.390858 containerd[2007]: time="2026-01-23T00:06:04.389939005Z" level=info msg="StartContainer for \"e7b36c70167d30c2cec5886b433043857c4f0dc62e1b44859e3d410d53eac9e2\" returns successfully"
Jan 23 00:06:04.397808 containerd[2007]: time="2026-01-23T00:06:04.397751665Z" level=info msg="StartContainer for \"0cb2c7ad50ed48ec1221bdadf5f30fbf8f669713cb2d673a05b3204a6c7e8c2f\" returns successfully"
Jan 23 00:06:04.505530 kubelet[2941]: E0123 00:06:04.505460 2941 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-104?timeout=10s\": dial tcp 172.31.17.104:6443: connect: connection refused" interval="1.6s"
Jan 23 00:06:04.725067 kubelet[2941]: I0123 00:06:04.725016 2941 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-104"
Jan 23 00:06:05.154773 kubelet[2941]: E0123 00:06:05.154720 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:05.157516 kubelet[2941]: E0123 00:06:05.157470 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:05.163827 kubelet[2941]: E0123 00:06:05.163780 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:06.167954 kubelet[2941]: E0123 00:06:06.167903 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:06.169354 kubelet[2941]: E0123 00:06:06.169311 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:06.171334 kubelet[2941]: E0123 00:06:06.171291 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:07.170745 kubelet[2941]: E0123 00:06:07.170698 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:07.171451 kubelet[2941]: E0123 00:06:07.171279 2941 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:08.653929 kubelet[2941]: E0123 00:06:08.653863 2941 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-104\" not found" node="ip-172-31-17-104"
Jan 23 00:06:08.720493 kubelet[2941]: I0123 00:06:08.720412 2941 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-104"
Jan 23 00:06:08.720493 kubelet[2941]: E0123 00:06:08.720486 2941 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-104\": node \"ip-172-31-17-104\" not found"
Jan 23 00:06:08.786642 kubelet[2941]: I0123 00:06:08.786517 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:08.802800 kubelet[2941]: E0123 00:06:08.802278 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-104\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:08.802800 kubelet[2941]: I0123 00:06:08.802333 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-104"
Jan 23 00:06:08.807884 kubelet[2941]: E0123 00:06:08.807820 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-104\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-104"
Jan 23 00:06:08.807884 kubelet[2941]: I0123 00:06:08.807877 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:08.815814 kubelet[2941]: E0123 00:06:08.815746 2941 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-104\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:09.072614 kubelet[2941]: I0123 00:06:09.071305 2941 apiserver.go:52] "Watching apiserver"
Jan 23 00:06:09.090568 kubelet[2941]: I0123 00:06:09.090370 2941 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 00:06:09.647505 kubelet[2941]: I0123 00:06:09.647439 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:10.497870 kubelet[2941]: I0123 00:06:10.497742 2941 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:10.725080 systemd[1]: Reload requested from client PID 3211 ('systemctl') (unit session-7.scope)...
Jan 23 00:06:10.725144 systemd[1]: Reloading...
Jan 23 00:06:10.951147 zram_generator::config[3258]: No configuration found.
Jan 23 00:06:11.434298 systemd[1]: Reloading finished in 708 ms.
Jan 23 00:06:11.479370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:11.493928 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 00:06:11.495221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:11.495319 systemd[1]: kubelet.service: Consumed 2.070s CPU time, 128.6M memory peak.
Jan 23 00:06:11.499238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:11.840627 update_engine[1974]: I20260123 00:06:11.840540 1974 update_attempter.cc:509] Updating boot flags...
Jan 23 00:06:11.859449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:11.885806 (kubelet)[3320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:06:12.051756 sudo[3346]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 23 00:06:12.053166 sudo[3346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 23 00:06:12.054858 kubelet[3320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:06:12.054858 kubelet[3320]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:06:12.055399 kubelet[3320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:06:12.055453 kubelet[3320]: I0123 00:06:12.055411 3320 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:06:12.104446 kubelet[3320]: I0123 00:06:12.103259 3320 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 00:06:12.104446 kubelet[3320]: I0123 00:06:12.103310 3320 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:06:12.104446 kubelet[3320]: I0123 00:06:12.104300 3320 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 00:06:12.120919 kubelet[3320]: I0123 00:06:12.120786 3320 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 00:06:12.130135 kubelet[3320]: I0123 00:06:12.130069 3320 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:06:12.150646 kubelet[3320]: I0123 00:06:12.150609 3320 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:06:12.170743 kubelet[3320]: I0123 00:06:12.170513 3320 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 00:06:12.173807 kubelet[3320]: I0123 00:06:12.173744 3320 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:06:12.174480 kubelet[3320]: I0123 00:06:12.173984 3320 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-104","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:06:12.176935 kubelet[3320]: I0123 00:06:12.176364 3320 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:06:12.176935 kubelet[3320]: I0123 00:06:12.176409 3320 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 00:06:12.176935 kubelet[3320]: I0123 00:06:12.176500 3320 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:06:12.177528 kubelet[3320]: I0123 00:06:12.177225 3320 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 00:06:12.177528 kubelet[3320]: I0123 00:06:12.177260 3320 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:06:12.179141 kubelet[3320]: I0123 00:06:12.177301 3320 kubelet.go:352] "Adding apiserver pod source"
Jan 23 00:06:12.179141 kubelet[3320]: I0123 00:06:12.178365 3320 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:06:12.184886 kubelet[3320]: I0123 00:06:12.184678 3320 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:06:12.187963 kubelet[3320]: I0123 00:06:12.187921 3320 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 00:06:12.190667 kubelet[3320]: I0123 00:06:12.189915 3320 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 00:06:12.191813 kubelet[3320]: I0123 00:06:12.191743 3320 server.go:1287] "Started kubelet"
Jan 23 00:06:12.217271 kubelet[3320]: I0123 00:06:12.217202 3320 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:06:12.228549 kubelet[3320]: I0123 00:06:12.226661 3320 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 00:06:12.249731 kubelet[3320]: I0123 00:06:12.249374 3320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:06:12.250994 kubelet[3320]: I0123 00:06:12.250860 3320 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:06:12.271396 kubelet[3320]: I0123 00:06:12.270857 3320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:06:12.287655 kubelet[3320]: I0123 00:06:12.287611 3320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 00:06:12.297923 kubelet[3320]: I0123 00:06:12.297840 3320 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 00:06:12.303200 kubelet[3320]: E0123 00:06:12.301690 3320 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-104\" not found"
Jan 23 00:06:12.312813 kubelet[3320]: I0123 00:06:12.312013 3320 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 00:06:12.321638 kubelet[3320]: I0123 00:06:12.313417 3320 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 00:06:12.343061 kubelet[3320]: I0123 00:06:12.342999 3320 factory.go:221] Registration of the systemd container factory successfully
Jan 23 00:06:12.346286 kubelet[3320]: I0123 00:06:12.345989 3320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 00:06:12.419827 kubelet[3320]: I0123 00:06:12.419645 3320 factory.go:221] Registration of the containerd container factory successfully
Jan 23 00:06:12.456178 kubelet[3320]: I0123 00:06:12.453288 3320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 00:06:12.485611 kubelet[3320]: I0123 00:06:12.483674 3320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 00:06:12.485611 kubelet[3320]: I0123 00:06:12.483723 3320 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 00:06:12.485611 kubelet[3320]: I0123 00:06:12.483755 3320 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:06:12.485611 kubelet[3320]: I0123 00:06:12.483768 3320 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 00:06:12.485611 kubelet[3320]: E0123 00:06:12.483839 3320 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 00:06:12.489098 kubelet[3320]: E0123 00:06:12.489056 3320 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 00:06:12.590651 kubelet[3320]: E0123 00:06:12.588318 3320 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 00:06:12.792928 kubelet[3320]: E0123 00:06:12.792169 3320 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 00:06:12.805168 kubelet[3320]: I0123 00:06:12.804842 3320 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 00:06:12.805168 kubelet[3320]: I0123 00:06:12.804882 3320 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 00:06:12.805168 kubelet[3320]: I0123 00:06:12.804918 3320 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:06:12.809718 kubelet[3320]: I0123 00:06:12.806541 3320 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 00:06:12.809718 kubelet[3320]: I0123 00:06:12.806582 3320 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 00:06:12.809718 kubelet[3320]: I0123 00:06:12.806620 3320 policy_none.go:49] "None policy: Start"
Jan 23 00:06:12.809718 kubelet[3320]: I0123 00:06:12.806639 3320 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 00:06:12.809718 kubelet[3320]: I0123 00:06:12.806663 3320 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 00:06:12.811629 kubelet[3320]: I0123 00:06:12.807472 3320 state_mem.go:75] "Updated machine memory state"
Jan 23 00:06:12.912495 kubelet[3320]: I0123 00:06:12.911085 3320 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 00:06:12.914901 kubelet[3320]: I0123 00:06:12.913087 3320 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:06:12.916372 kubelet[3320]: I0123 00:06:12.914699 3320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:06:12.920318 kubelet[3320]: I0123 00:06:12.919368 3320 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:06:12.946878 kubelet[3320]: E0123 00:06:12.946814 3320 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:06:13.082806 kubelet[3320]: I0123 00:06:13.082220 3320 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-104"
Jan 23 00:06:13.122784 kubelet[3320]: I0123 00:06:13.122162 3320 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-104"
Jan 23 00:06:13.122784 kubelet[3320]: I0123 00:06:13.122289 3320 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-104"
Jan 23 00:06:13.179877 kubelet[3320]: I0123 00:06:13.179584 3320 apiserver.go:52] "Watching apiserver"
Jan 23 00:06:13.193554 kubelet[3320]: I0123 00:06:13.192982 3320 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-104"
Jan 23 00:06:13.223922 kubelet[3320]: I0123 00:06:13.222045 3320 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 00:06:13.259443 kubelet[3320]: I0123 00:06:13.259355 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:13.259443 kubelet[3320]: I0123 00:06:13.259431 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104"
Jan 23 00:06:13.262237 kubelet[3320]: I0123 00:06:13.259478 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42cc1c70daa2e83181c8b622aa20912b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-104\" (UID: \"42cc1c70daa2e83181c8b622aa20912b\") " pod="kube-system/kube-scheduler-ip-172-31-17-104"
Jan 23 00:06:13.262237 kubelet[3320]: I0123 00:06:13.259513 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-ca-certs\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:13.262237 kubelet[3320]: I0123 00:06:13.259549 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104"
Jan 23 00:06:13.262237 kubelet[3320]: I0123 00:06:13.259587 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104" Jan 23 00:06:13.262237 kubelet[3320]: I0123 00:06:13.259646 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104" Jan 23 00:06:13.262576 kubelet[3320]: I0123 00:06:13.259689 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c11e9eb2c9821fe9d6ab77808b25a522-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-104\" (UID: \"c11e9eb2c9821fe9d6ab77808b25a522\") " pod="kube-system/kube-apiserver-ip-172-31-17-104" Jan 23 00:06:13.262576 kubelet[3320]: I0123 00:06:13.259731 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8caac315fcbbc38f350faa8ec1afa1d1-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-104\" (UID: \"8caac315fcbbc38f350faa8ec1afa1d1\") " pod="kube-system/kube-controller-manager-ip-172-31-17-104" Jan 23 00:06:13.321352 sudo[3346]: pam_unix(sudo:session): session closed for user root Jan 23 00:06:13.450832 kubelet[3320]: I0123 00:06:13.450657 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-104" podStartSLOduration=4.450634954 podStartE2EDuration="4.450634954s" podCreationTimestamp="2026-01-23 00:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-23 00:06:13.408595942 +0000 UTC m=+1.508734881" watchObservedRunningTime="2026-01-23 00:06:13.450634954 +0000 UTC m=+1.550773893" Jan 23 00:06:13.505133 kubelet[3320]: I0123 00:06:13.504615 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-104" podStartSLOduration=3.5045906860000002 podStartE2EDuration="3.504590686s" podCreationTimestamp="2026-01-23 00:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:06:13.454214734 +0000 UTC m=+1.554353673" watchObservedRunningTime="2026-01-23 00:06:13.504590686 +0000 UTC m=+1.604729613" Jan 23 00:06:13.511325 kubelet[3320]: I0123 00:06:13.509252 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-104" podStartSLOduration=0.509229022 podStartE2EDuration="509.229022ms" podCreationTimestamp="2026-01-23 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:06:13.499433782 +0000 UTC m=+1.599572745" watchObservedRunningTime="2026-01-23 00:06:13.509229022 +0000 UTC m=+1.609368033" Jan 23 00:06:15.860028 sudo[2360]: pam_unix(sudo:session): session closed for user root Jan 23 00:06:15.938308 sshd[2359]: Connection closed by 4.153.228.146 port 35782 Jan 23 00:06:15.939163 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:15.946493 systemd[1]: sshd@6-172.31.17.104:22-4.153.228.146:35782.service: Deactivated successfully. Jan 23 00:06:15.951482 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:06:15.953064 systemd[1]: session-7.scope: Consumed 11.563s CPU time, 262.3M memory peak. Jan 23 00:06:15.955411 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit. 
Jan 23 00:06:15.958709 systemd-logind[1973]: Removed session 7.
Jan 23 00:06:17.442057 kubelet[3320]: I0123 00:06:17.440771 3320 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 00:06:17.442677 containerd[2007]: time="2026-01-23T00:06:17.441306158Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 00:06:17.443609 kubelet[3320]: I0123 00:06:17.443437 3320 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 00:06:18.349633 systemd[1]: Created slice kubepods-besteffort-pod43e98193_c401_4703_8ddc_8256ecfa58a1.slice - libcontainer container kubepods-besteffort-pod43e98193_c401_4703_8ddc_8256ecfa58a1.slice.
Jan 23 00:06:18.359164 kubelet[3320]: I0123 00:06:18.358634 3320 status_manager.go:890] "Failed to get status for pod" podUID="43e98193-c401-4703-8ddc-8256ecfa58a1" pod="kube-system/kube-proxy-r8v5t" err="pods \"kube-proxy-r8v5t\" is forbidden: User \"system:node:ip-172-31-17-104\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-104' and this object"
Jan 23 00:06:18.362870 kubelet[3320]: W0123 00:06:18.362084 3320 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-104" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-104' and this object
Jan 23 00:06:18.364152 kubelet[3320]: E0123 00:06:18.364065 3320 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-104\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-104' and this object" logger="UnhandledError"
Jan 23 00:06:18.381954 systemd[1]: Created slice kubepods-burstable-pod38f2e9b7_9f80_49bf_b70a_ba7f92f0ab91.slice - libcontainer container kubepods-burstable-pod38f2e9b7_9f80_49bf_b70a_ba7f92f0ab91.slice.
Jan 23 00:06:18.397682 kubelet[3320]: I0123 00:06:18.397611 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-etc-cni-netd\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.397682 kubelet[3320]: I0123 00:06:18.397684 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-lib-modules\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.397912 kubelet[3320]: I0123 00:06:18.397723 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-config-path\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.397912 kubelet[3320]: I0123 00:06:18.397758 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-xtables-lock\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.397912 kubelet[3320]: I0123 00:06:18.397800 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43e98193-c401-4703-8ddc-8256ecfa58a1-xtables-lock\") pod \"kube-proxy-r8v5t\" (UID: \"43e98193-c401-4703-8ddc-8256ecfa58a1\") " pod="kube-system/kube-proxy-r8v5t"
Jan 23 00:06:18.397912 kubelet[3320]: I0123 00:06:18.397835 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-cgroup\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.397912 kubelet[3320]: I0123 00:06:18.397872 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-net\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.397916 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hubble-tls\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.397954 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-run\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.397989 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd22m\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-kube-api-access-gd22m\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.398030 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hostproc\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.398085 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cni-path\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.398618 kubelet[3320]: I0123 00:06:18.398155 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43e98193-c401-4703-8ddc-8256ecfa58a1-kube-proxy\") pod \"kube-proxy-r8v5t\" (UID: \"43e98193-c401-4703-8ddc-8256ecfa58a1\") " pod="kube-system/kube-proxy-r8v5t"
Jan 23 00:06:18.399335 kubelet[3320]: I0123 00:06:18.398197 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43e98193-c401-4703-8ddc-8256ecfa58a1-lib-modules\") pod \"kube-proxy-r8v5t\" (UID: \"43e98193-c401-4703-8ddc-8256ecfa58a1\") " pod="kube-system/kube-proxy-r8v5t"
Jan 23 00:06:18.399335 kubelet[3320]: I0123 00:06:18.398235 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phsv\" (UniqueName: \"kubernetes.io/projected/43e98193-c401-4703-8ddc-8256ecfa58a1-kube-api-access-8phsv\") pod \"kube-proxy-r8v5t\" (UID: \"43e98193-c401-4703-8ddc-8256ecfa58a1\") " pod="kube-system/kube-proxy-r8v5t"
Jan 23 00:06:18.399335 kubelet[3320]: I0123 00:06:18.398272 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-bpf-maps\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.399335 kubelet[3320]: I0123 00:06:18.398306 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-clustermesh-secrets\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.399335 kubelet[3320]: I0123 00:06:18.398342 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-kernel\") pod \"cilium-6xcwn\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") " pod="kube-system/cilium-6xcwn"
Jan 23 00:06:18.494869 systemd[1]: Created slice kubepods-besteffort-podc33a51f7_5d6f_4d5d_aeee_c9d8bd6575dc.slice - libcontainer container kubepods-besteffort-podc33a51f7_5d6f_4d5d_aeee_c9d8bd6575dc.slice.
Jan 23 00:06:18.601952 kubelet[3320]: I0123 00:06:18.599977 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ls7\" (UniqueName: \"kubernetes.io/projected/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-kube-api-access-26ls7\") pod \"cilium-operator-6c4d7847fc-j92lv\" (UID: \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\") " pod="kube-system/cilium-operator-6c4d7847fc-j92lv"
Jan 23 00:06:18.603130 kubelet[3320]: I0123 00:06:18.602567 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-j92lv\" (UID: \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\") " pod="kube-system/cilium-operator-6c4d7847fc-j92lv"
Jan 23 00:06:18.694441 containerd[2007]: time="2026-01-23T00:06:18.692966476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xcwn,Uid:38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:18.744586 containerd[2007]: time="2026-01-23T00:06:18.744501364Z" level=info msg="connecting to shim 37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:18.785437 systemd[1]: Started cri-containerd-37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3.scope - libcontainer container 37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3.
Jan 23 00:06:18.819991 containerd[2007]: time="2026-01-23T00:06:18.819858124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j92lv,Uid:c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:18.851334 containerd[2007]: time="2026-01-23T00:06:18.851238401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xcwn,Uid:38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91,Namespace:kube-system,Attempt:0,} returns sandbox id \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\""
Jan 23 00:06:18.857845 containerd[2007]: time="2026-01-23T00:06:18.857665793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 23 00:06:18.859622 containerd[2007]: time="2026-01-23T00:06:18.859550549Z" level=info msg="connecting to shim 2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6" address="unix:///run/containerd/s/79739873cacfa84207990da7ee75c699e31e30cf4f4c1600f1cf2f152d8bfed5" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:18.901430 systemd[1]: Started cri-containerd-2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6.scope - libcontainer container 2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6.
Jan 23 00:06:18.980701 containerd[2007]: time="2026-01-23T00:06:18.979943273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-j92lv,Uid:c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\""
Jan 23 00:06:19.511297 kubelet[3320]: E0123 00:06:19.511231 3320 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:06:19.511475 kubelet[3320]: E0123 00:06:19.511357 3320 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43e98193-c401-4703-8ddc-8256ecfa58a1-kube-proxy podName:43e98193-c401-4703-8ddc-8256ecfa58a1 nodeName:}" failed. No retries permitted until 2026-01-23 00:06:20.011324 +0000 UTC m=+8.111462927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/43e98193-c401-4703-8ddc-8256ecfa58a1-kube-proxy") pod "kube-proxy-r8v5t" (UID: "43e98193-c401-4703-8ddc-8256ecfa58a1") : failed to sync configmap cache: timed out waiting for the condition
Jan 23 00:06:20.181588 containerd[2007]: time="2026-01-23T00:06:20.181433955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8v5t,Uid:43e98193-c401-4703-8ddc-8256ecfa58a1,Namespace:kube-system,Attempt:0,}"
Jan 23 00:06:20.218397 containerd[2007]: time="2026-01-23T00:06:20.218302755Z" level=info msg="connecting to shim 04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054" address="unix:///run/containerd/s/ded4ebdd55f9e5b5a2bf8bcdeae285fbaacf4a4d77c99c2ced8e408c0db3c73e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:06:20.265427 systemd[1]: Started cri-containerd-04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054.scope - libcontainer container 04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054.
Jan 23 00:06:20.319129 containerd[2007]: time="2026-01-23T00:06:20.319059520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r8v5t,Uid:43e98193-c401-4703-8ddc-8256ecfa58a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054\""
Jan 23 00:06:20.326905 containerd[2007]: time="2026-01-23T00:06:20.326531380Z" level=info msg="CreateContainer within sandbox \"04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 00:06:20.343135 containerd[2007]: time="2026-01-23T00:06:20.343055980Z" level=info msg="Container 487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:20.352946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603294542.mount: Deactivated successfully.
Jan 23 00:06:20.363889 containerd[2007]: time="2026-01-23T00:06:20.363816412Z" level=info msg="CreateContainer within sandbox \"04b64021eeb398a46189e4b8a0268deb473acd2b5fee2deb51dd89d8d3c82054\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33\""
Jan 23 00:06:20.367212 containerd[2007]: time="2026-01-23T00:06:20.365853124Z" level=info msg="StartContainer for \"487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33\""
Jan 23 00:06:20.369381 containerd[2007]: time="2026-01-23T00:06:20.369250804Z" level=info msg="connecting to shim 487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33" address="unix:///run/containerd/s/ded4ebdd55f9e5b5a2bf8bcdeae285fbaacf4a4d77c99c2ced8e408c0db3c73e" protocol=ttrpc version=3
Jan 23 00:06:20.431490 systemd[1]: Started cri-containerd-487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33.scope - libcontainer container 487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33.
Jan 23 00:06:20.591301 containerd[2007]: time="2026-01-23T00:06:20.591002609Z" level=info msg="StartContainer for \"487426016f77dac778641f7e5e76a35d7886d5896ceddbfbe5d4ef7d5d335f33\" returns successfully"
Jan 23 00:06:23.897209 kubelet[3320]: I0123 00:06:23.896861 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r8v5t" podStartSLOduration=5.8968202739999995 podStartE2EDuration="5.896820274s" podCreationTimestamp="2026-01-23 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:06:20.755994186 +0000 UTC m=+8.856133137" watchObservedRunningTime="2026-01-23 00:06:23.896820274 +0000 UTC m=+11.996959453"
Jan 23 00:06:29.357593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846954909.mount: Deactivated successfully.
Jan 23 00:06:31.921143 containerd[2007]: time="2026-01-23T00:06:31.921026670Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:31.936967 containerd[2007]: time="2026-01-23T00:06:31.936768930Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 23 00:06:31.961132 containerd[2007]: time="2026-01-23T00:06:31.960313230Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:31.964842 containerd[2007]: time="2026-01-23T00:06:31.964520574Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.106275385s"
Jan 23 00:06:31.964982 containerd[2007]: time="2026-01-23T00:06:31.964837086Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 23 00:06:31.970240 containerd[2007]: time="2026-01-23T00:06:31.969805494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 23 00:06:31.975695 containerd[2007]: time="2026-01-23T00:06:31.975542562Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 00:06:31.999375 containerd[2007]: time="2026-01-23T00:06:31.999300318Z" level=info msg="Container 24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:32.016620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195959013.mount: Deactivated successfully.
Jan 23 00:06:32.027881 containerd[2007]: time="2026-01-23T00:06:32.027812894Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\""
Jan 23 00:06:32.029223 containerd[2007]: time="2026-01-23T00:06:32.028997234Z" level=info msg="StartContainer for \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\""
Jan 23 00:06:32.032051 containerd[2007]: time="2026-01-23T00:06:32.031985834Z" level=info msg="connecting to shim 24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" protocol=ttrpc version=3
Jan 23 00:06:32.067436 systemd[1]: Started cri-containerd-24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e.scope - libcontainer container 24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e.
Jan 23 00:06:32.132707 containerd[2007]: time="2026-01-23T00:06:32.132640935Z" level=info msg="StartContainer for \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" returns successfully"
Jan 23 00:06:32.160335 systemd[1]: cri-containerd-24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e.scope: Deactivated successfully.
Jan 23 00:06:32.169950 containerd[2007]: time="2026-01-23T00:06:32.169900035Z" level=info msg="received container exit event container_id:\"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" id:\"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" pid:3917 exited_at:{seconds:1769126792 nanos:168695199}"
Jan 23 00:06:32.210382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e-rootfs.mount: Deactivated successfully.
Jan 23 00:06:33.760875 containerd[2007]: time="2026-01-23T00:06:33.760795291Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 00:06:33.784495 containerd[2007]: time="2026-01-23T00:06:33.783404707Z" level=info msg="Container 3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:06:33.795806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187309696.mount: Deactivated successfully.
Jan 23 00:06:33.808442 containerd[2007]: time="2026-01-23T00:06:33.808288303Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\""
Jan 23 00:06:33.814139 containerd[2007]: time="2026-01-23T00:06:33.813340063Z" level=info msg="StartContainer for \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\""
Jan 23 00:06:33.821239 containerd[2007]: time="2026-01-23T00:06:33.821077555Z" level=info msg="connecting to shim 3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" protocol=ttrpc version=3
Jan 23 00:06:33.880432 systemd[1]: Started cri-containerd-3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303.scope - libcontainer container 3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303.
Jan 23 00:06:33.938320 containerd[2007]: time="2026-01-23T00:06:33.938264708Z" level=info msg="StartContainer for \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" returns successfully"
Jan 23 00:06:33.964469 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:06:33.965716 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:33.967404 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:33.971656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:33.977705 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:06:33.978550 containerd[2007]: time="2026-01-23T00:06:33.978483824Z" level=info msg="received container exit event container_id:\"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" id:\"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" pid:3969 exited_at:{seconds:1769126793 nanos:976728476}"
Jan 23 00:06:33.979517 systemd[1]: cri-containerd-3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303.scope: Deactivated successfully.
Jan 23 00:06:34.017262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:34.778546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303-rootfs.mount: Deactivated successfully.
Jan 23 00:06:34.781014 containerd[2007]: time="2026-01-23T00:06:34.780822500Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 00:06:34.810901 containerd[2007]: time="2026-01-23T00:06:34.810785840Z" level=info msg="Container 5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:34.839357 containerd[2007]: time="2026-01-23T00:06:34.838561604Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\"" Jan 23 00:06:34.845118 containerd[2007]: time="2026-01-23T00:06:34.844376180Z" level=info msg="StartContainer for \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\"" Jan 23 00:06:34.851418 containerd[2007]: time="2026-01-23T00:06:34.851337824Z" level=info msg="connecting to shim 5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" protocol=ttrpc version=3 Jan 23 00:06:34.900858 systemd[1]: Started cri-containerd-5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de.scope - libcontainer container 5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de. Jan 23 00:06:35.020475 containerd[2007]: time="2026-01-23T00:06:35.020398397Z" level=info msg="StartContainer for \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" returns successfully" Jan 23 00:06:35.024369 systemd[1]: cri-containerd-5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de.scope: Deactivated successfully. 
Jan 23 00:06:35.033013 containerd[2007]: time="2026-01-23T00:06:35.032396741Z" level=info msg="received container exit event container_id:\"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" id:\"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" pid:4028 exited_at:{seconds:1769126795 nanos:31875161}" Jan 23 00:06:35.072563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de-rootfs.mount: Deactivated successfully. Jan 23 00:06:35.781543 containerd[2007]: time="2026-01-23T00:06:35.781457973Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 00:06:35.809225 containerd[2007]: time="2026-01-23T00:06:35.809096673Z" level=info msg="Container ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:35.830275 containerd[2007]: time="2026-01-23T00:06:35.830199861Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\"" Jan 23 00:06:35.831533 containerd[2007]: time="2026-01-23T00:06:35.831347445Z" level=info msg="StartContainer for \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\"" Jan 23 00:06:35.836912 containerd[2007]: time="2026-01-23T00:06:35.836821593Z" level=info msg="connecting to shim ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" protocol=ttrpc version=3 Jan 23 00:06:35.877488 systemd[1]: Started cri-containerd-ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef.scope - libcontainer container 
ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef. Jan 23 00:06:35.948899 systemd[1]: cri-containerd-ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef.scope: Deactivated successfully. Jan 23 00:06:35.959360 containerd[2007]: time="2026-01-23T00:06:35.959176114Z" level=info msg="received container exit event container_id:\"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" id:\"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" pid:4070 exited_at:{seconds:1769126795 nanos:955940470}" Jan 23 00:06:35.964467 containerd[2007]: time="2026-01-23T00:06:35.964304890Z" level=info msg="StartContainer for \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" returns successfully" Jan 23 00:06:36.022099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef-rootfs.mount: Deactivated successfully. Jan 23 00:06:36.505576 containerd[2007]: time="2026-01-23T00:06:36.505323704Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:06:36.507360 containerd[2007]: time="2026-01-23T00:06:36.507279812Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 23 00:06:36.508576 containerd[2007]: time="2026-01-23T00:06:36.508076012Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:06:36.513334 containerd[2007]: time="2026-01-23T00:06:36.513248984Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.543372834s" Jan 23 00:06:36.513519 containerd[2007]: time="2026-01-23T00:06:36.513392888Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 23 00:06:36.518457 containerd[2007]: time="2026-01-23T00:06:36.518392916Z" level=info msg="CreateContainer within sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 00:06:36.535423 containerd[2007]: time="2026-01-23T00:06:36.535357412Z" level=info msg="Container 72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:36.551903 containerd[2007]: time="2026-01-23T00:06:36.551793321Z" level=info msg="CreateContainer within sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\"" Jan 23 00:06:36.553480 containerd[2007]: time="2026-01-23T00:06:36.553266597Z" level=info msg="StartContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\"" Jan 23 00:06:36.556177 containerd[2007]: time="2026-01-23T00:06:36.556097157Z" level=info msg="connecting to shim 72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9" address="unix:///run/containerd/s/79739873cacfa84207990da7ee75c699e31e30cf4f4c1600f1cf2f152d8bfed5" protocol=ttrpc version=3 Jan 23 00:06:36.592437 systemd[1]: Started 
cri-containerd-72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9.scope - libcontainer container 72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9. Jan 23 00:06:36.649471 containerd[2007]: time="2026-01-23T00:06:36.649310169Z" level=info msg="StartContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" returns successfully" Jan 23 00:06:36.808579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579586680.mount: Deactivated successfully. Jan 23 00:06:36.816599 containerd[2007]: time="2026-01-23T00:06:36.816499582Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 00:06:36.855399 containerd[2007]: time="2026-01-23T00:06:36.855321550Z" level=info msg="Container 381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:36.859859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537104753.mount: Deactivated successfully. Jan 23 00:06:36.868626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963234592.mount: Deactivated successfully. 
Jan 23 00:06:36.905471 containerd[2007]: time="2026-01-23T00:06:36.905378686Z" level=info msg="CreateContainer within sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\"" Jan 23 00:06:36.907391 containerd[2007]: time="2026-01-23T00:06:36.907325830Z" level=info msg="StartContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\"" Jan 23 00:06:36.911878 containerd[2007]: time="2026-01-23T00:06:36.911808454Z" level=info msg="connecting to shim 381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18" address="unix:///run/containerd/s/bf3446f4a6fa8b3d89c0266ced82ef50dad3a36409df5cd300fc1b76c2f6f72b" protocol=ttrpc version=3 Jan 23 00:06:36.942387 kubelet[3320]: I0123 00:06:36.941335 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-j92lv" podStartSLOduration=1.4086092909999999 podStartE2EDuration="18.941311486s" podCreationTimestamp="2026-01-23 00:06:18 +0000 UTC" firstStartedPulling="2026-01-23 00:06:18.982584173 +0000 UTC m=+7.082723100" lastFinishedPulling="2026-01-23 00:06:36.515286368 +0000 UTC m=+24.615425295" observedRunningTime="2026-01-23 00:06:36.827864098 +0000 UTC m=+24.928003109" watchObservedRunningTime="2026-01-23 00:06:36.941311486 +0000 UTC m=+25.041450425" Jan 23 00:06:36.982184 systemd[1]: Started cri-containerd-381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18.scope - libcontainer container 381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18. 
Jan 23 00:06:37.119825 containerd[2007]: time="2026-01-23T00:06:37.119665567Z" level=info msg="StartContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" returns successfully" Jan 23 00:06:37.436257 kubelet[3320]: I0123 00:06:37.435006 3320 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 00:06:37.504212 systemd[1]: Created slice kubepods-burstable-pod38f08757_a99f_45a9_8b7f_f247e7d0f265.slice - libcontainer container kubepods-burstable-pod38f08757_a99f_45a9_8b7f_f247e7d0f265.slice. Jan 23 00:06:37.524758 systemd[1]: Created slice kubepods-burstable-podd550c164_6c20_4574_b63a_bb6e37714081.slice - libcontainer container kubepods-burstable-podd550c164_6c20_4574_b63a_bb6e37714081.slice. Jan 23 00:06:37.548939 kubelet[3320]: I0123 00:06:37.548868 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjng\" (UniqueName: \"kubernetes.io/projected/38f08757-a99f-45a9-8b7f-f247e7d0f265-kube-api-access-8gjng\") pod \"coredns-668d6bf9bc-tqmq5\" (UID: \"38f08757-a99f-45a9-8b7f-f247e7d0f265\") " pod="kube-system/coredns-668d6bf9bc-tqmq5" Jan 23 00:06:37.550131 kubelet[3320]: I0123 00:06:37.549437 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38f08757-a99f-45a9-8b7f-f247e7d0f265-config-volume\") pod \"coredns-668d6bf9bc-tqmq5\" (UID: \"38f08757-a99f-45a9-8b7f-f247e7d0f265\") " pod="kube-system/coredns-668d6bf9bc-tqmq5" Jan 23 00:06:37.550131 kubelet[3320]: I0123 00:06:37.549518 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d550c164-6c20-4574-b63a-bb6e37714081-config-volume\") pod \"coredns-668d6bf9bc-m9xqn\" (UID: \"d550c164-6c20-4574-b63a-bb6e37714081\") " pod="kube-system/coredns-668d6bf9bc-m9xqn" Jan 23 00:06:37.550131 
kubelet[3320]: I0123 00:06:37.549600 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8wk\" (UniqueName: \"kubernetes.io/projected/d550c164-6c20-4574-b63a-bb6e37714081-kube-api-access-7j8wk\") pod \"coredns-668d6bf9bc-m9xqn\" (UID: \"d550c164-6c20-4574-b63a-bb6e37714081\") " pod="kube-system/coredns-668d6bf9bc-m9xqn" Jan 23 00:06:37.817544 containerd[2007]: time="2026-01-23T00:06:37.817494299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tqmq5,Uid:38f08757-a99f-45a9-8b7f-f247e7d0f265,Namespace:kube-system,Attempt:0,}" Jan 23 00:06:37.838163 containerd[2007]: time="2026-01-23T00:06:37.836680607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9xqn,Uid:d550c164-6c20-4574-b63a-bb6e37714081,Namespace:kube-system,Attempt:0,}" Jan 23 00:06:42.034042 systemd-networkd[1817]: cilium_host: Link UP Jan 23 00:06:42.036504 systemd-networkd[1817]: cilium_net: Link UP Jan 23 00:06:42.036911 systemd-networkd[1817]: cilium_net: Gained carrier Jan 23 00:06:42.040497 (udev-worker)[4281]: Network interface NamePolicy= disabled on kernel command line. Jan 23 00:06:42.042224 (udev-worker)[4240]: Network interface NamePolicy= disabled on kernel command line. Jan 23 00:06:42.043011 systemd-networkd[1817]: cilium_host: Gained carrier Jan 23 00:06:42.254214 systemd-networkd[1817]: cilium_vxlan: Link UP Jan 23 00:06:42.254236 systemd-networkd[1817]: cilium_vxlan: Gained carrier Jan 23 00:06:42.854164 kernel: NET: Registered PF_ALG protocol family Jan 23 00:06:42.855333 systemd-networkd[1817]: cilium_host: Gained IPv6LL Jan 23 00:06:42.919391 systemd-networkd[1817]: cilium_net: Gained IPv6LL Jan 23 00:06:43.817010 systemd-networkd[1817]: cilium_vxlan: Gained IPv6LL Jan 23 00:06:44.301179 (udev-worker)[4292]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 00:06:44.313532 systemd-networkd[1817]: lxc_health: Link UP Jan 23 00:06:44.319039 systemd-networkd[1817]: lxc_health: Gained carrier Jan 23 00:06:44.728631 kubelet[3320]: I0123 00:06:44.728438 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6xcwn" podStartSLOduration=13.614484616 podStartE2EDuration="26.728414729s" podCreationTimestamp="2026-01-23 00:06:18 +0000 UTC" firstStartedPulling="2026-01-23 00:06:18.854759237 +0000 UTC m=+6.954898152" lastFinishedPulling="2026-01-23 00:06:31.968689338 +0000 UTC m=+20.068828265" observedRunningTime="2026-01-23 00:06:38.013421492 +0000 UTC m=+26.113560443" watchObservedRunningTime="2026-01-23 00:06:44.728414729 +0000 UTC m=+32.828553656" Jan 23 00:06:44.965318 kernel: eth0: renamed from tmp95e33 Jan 23 00:06:44.968360 systemd-networkd[1817]: lxcdff35bc241cf: Link UP Jan 23 00:06:44.973424 systemd-networkd[1817]: lxcdff35bc241cf: Gained carrier Jan 23 00:06:44.990160 kernel: eth0: renamed from tmp695a3 Jan 23 00:06:44.994430 systemd-networkd[1817]: lxc663807e556e8: Link UP Jan 23 00:06:44.997629 systemd-networkd[1817]: lxc663807e556e8: Gained carrier Jan 23 00:06:45.544594 systemd-networkd[1817]: lxc_health: Gained IPv6LL Jan 23 00:06:46.119363 systemd-networkd[1817]: lxcdff35bc241cf: Gained IPv6LL Jan 23 00:06:46.760064 systemd-networkd[1817]: lxc663807e556e8: Gained IPv6LL Jan 23 00:06:49.219553 ntpd[2206]: Listen normally on 6 cilium_host 192.168.0.140:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 6 cilium_host 192.168.0.140:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 7 cilium_net [fe80::946b:e0ff:fed0:30f%4]:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 8 cilium_host [fe80::d4ff:e9ff:fe12:87d%5]:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 9 cilium_vxlan [fe80::e49b:d7ff:fe7f:5f08%6]:123 Jan 23 
00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 10 lxc_health [fe80::dce3:50ff:fec8:54ca%8]:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 11 lxcdff35bc241cf [fe80::f494:6ff:fefa:e71c%10]:123 Jan 23 00:06:49.222350 ntpd[2206]: 23 Jan 00:06:49 ntpd[2206]: Listen normally on 12 lxc663807e556e8 [fe80::809c:5dff:fe69:bb73%12]:123 Jan 23 00:06:49.219630 ntpd[2206]: Listen normally on 7 cilium_net [fe80::946b:e0ff:fed0:30f%4]:123 Jan 23 00:06:49.219678 ntpd[2206]: Listen normally on 8 cilium_host [fe80::d4ff:e9ff:fe12:87d%5]:123 Jan 23 00:06:49.219723 ntpd[2206]: Listen normally on 9 cilium_vxlan [fe80::e49b:d7ff:fe7f:5f08%6]:123 Jan 23 00:06:49.219767 ntpd[2206]: Listen normally on 10 lxc_health [fe80::dce3:50ff:fec8:54ca%8]:123 Jan 23 00:06:49.219811 ntpd[2206]: Listen normally on 11 lxcdff35bc241cf [fe80::f494:6ff:fefa:e71c%10]:123 Jan 23 00:06:49.219859 ntpd[2206]: Listen normally on 12 lxc663807e556e8 [fe80::809c:5dff:fe69:bb73%12]:123 Jan 23 00:06:53.504495 containerd[2007]: time="2026-01-23T00:06:53.504373429Z" level=info msg="connecting to shim 695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20" address="unix:///run/containerd/s/6c5d8052924f19d6a76f01c3179ea1b55b5eb4f97fbdde547e907c511137c1ed" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:06:53.593544 systemd[1]: Started cri-containerd-695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20.scope - libcontainer container 695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20. 
Jan 23 00:06:53.608182 containerd[2007]: time="2026-01-23T00:06:53.607523461Z" level=info msg="connecting to shim 95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092" address="unix:///run/containerd/s/5d4ecac09e2b90f6e78a93670a173cbc219ca307f152acc6b1d008c2b19ddfb4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:06:53.688340 systemd[1]: Started cri-containerd-95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092.scope - libcontainer container 95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092. Jan 23 00:06:53.835670 containerd[2007]: time="2026-01-23T00:06:53.834983486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m9xqn,Uid:d550c164-6c20-4574-b63a-bb6e37714081,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092\"" Jan 23 00:06:53.839391 containerd[2007]: time="2026-01-23T00:06:53.839076350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tqmq5,Uid:38f08757-a99f-45a9-8b7f-f247e7d0f265,Namespace:kube-system,Attempt:0,} returns sandbox id \"695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20\"" Jan 23 00:06:53.847893 containerd[2007]: time="2026-01-23T00:06:53.847677926Z" level=info msg="CreateContainer within sandbox \"695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:06:53.849998 containerd[2007]: time="2026-01-23T00:06:53.849215174Z" level=info msg="CreateContainer within sandbox \"95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:06:53.875606 containerd[2007]: time="2026-01-23T00:06:53.875530299Z" level=info msg="Container 8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:53.880822 containerd[2007]: time="2026-01-23T00:06:53.880764183Z" 
level=info msg="Container b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:53.889618 containerd[2007]: time="2026-01-23T00:06:53.889556763Z" level=info msg="CreateContainer within sandbox \"695a38406f96a253929a9fdcfa59b065e3f138c3ba24242c9662614b772ceb20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea\"" Jan 23 00:06:53.891719 containerd[2007]: time="2026-01-23T00:06:53.891600711Z" level=info msg="StartContainer for \"8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea\"" Jan 23 00:06:53.894662 containerd[2007]: time="2026-01-23T00:06:53.894531423Z" level=info msg="connecting to shim 8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea" address="unix:///run/containerd/s/6c5d8052924f19d6a76f01c3179ea1b55b5eb4f97fbdde547e907c511137c1ed" protocol=ttrpc version=3 Jan 23 00:06:53.896585 containerd[2007]: time="2026-01-23T00:06:53.896504139Z" level=info msg="CreateContainer within sandbox \"95e33bb47a4370231bda75660c8b17e864d0c4f9cd9bde04f999a3a9ec4b7092\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b\"" Jan 23 00:06:53.897788 containerd[2007]: time="2026-01-23T00:06:53.897743583Z" level=info msg="StartContainer for \"b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b\"" Jan 23 00:06:53.912044 containerd[2007]: time="2026-01-23T00:06:53.911899959Z" level=info msg="connecting to shim b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b" address="unix:///run/containerd/s/5d4ecac09e2b90f6e78a93670a173cbc219ca307f152acc6b1d008c2b19ddfb4" protocol=ttrpc version=3 Jan 23 00:06:53.957563 systemd[1]: Started cri-containerd-8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea.scope - libcontainer container 
8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea. Jan 23 00:06:53.970813 systemd[1]: Started cri-containerd-b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b.scope - libcontainer container b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b. Jan 23 00:06:54.063618 containerd[2007]: time="2026-01-23T00:06:54.063508548Z" level=info msg="StartContainer for \"8d431bb33c49962455fd4f574f99ce1a698931d889eee2846f5597044e4e74ea\" returns successfully" Jan 23 00:06:54.072573 containerd[2007]: time="2026-01-23T00:06:54.072408096Z" level=info msg="StartContainer for \"b6a4d564b18d8d72bc840e175f30a5c4730dc9e4348043a6ba16bc37e20bb51b\" returns successfully" Jan 23 00:06:54.492917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071174039.mount: Deactivated successfully. Jan 23 00:06:54.966238 kubelet[3320]: I0123 00:06:54.965829 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m9xqn" podStartSLOduration=36.965803468 podStartE2EDuration="36.965803468s" podCreationTimestamp="2026-01-23 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:06:54.964031488 +0000 UTC m=+43.064170415" watchObservedRunningTime="2026-01-23 00:06:54.965803468 +0000 UTC m=+43.065942383" Jan 23 00:06:54.991145 kubelet[3320]: I0123 00:06:54.989937 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tqmq5" podStartSLOduration=36.989914744000004 podStartE2EDuration="36.989914744s" podCreationTimestamp="2026-01-23 00:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:06:54.989568124 +0000 UTC m=+43.089707051" watchObservedRunningTime="2026-01-23 00:06:54.989914744 +0000 UTC m=+43.090053683" Jan 23 00:07:00.108665 systemd[1]: Started 
sshd@7-172.31.17.104:22-4.153.228.146:33538.service - OpenSSH per-connection server daemon (4.153.228.146:33538). Jan 23 00:07:00.639523 sshd[4829]: Accepted publickey for core from 4.153.228.146 port 33538 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:00.641849 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:00.650206 systemd-logind[1973]: New session 8 of user core. Jan 23 00:07:00.661453 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 00:07:01.138036 sshd[4832]: Connection closed by 4.153.228.146 port 33538 Jan 23 00:07:01.138919 sshd-session[4829]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:01.147646 systemd[1]: sshd@7-172.31.17.104:22-4.153.228.146:33538.service: Deactivated successfully. Jan 23 00:07:01.153369 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:07:01.156592 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:07:01.161909 systemd-logind[1973]: Removed session 8. Jan 23 00:07:06.229865 systemd[1]: Started sshd@8-172.31.17.104:22-4.153.228.146:50712.service - OpenSSH per-connection server daemon (4.153.228.146:50712). Jan 23 00:07:06.749938 sshd[4845]: Accepted publickey for core from 4.153.228.146 port 50712 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:06.752354 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:06.761049 systemd-logind[1973]: New session 9 of user core. Jan 23 00:07:06.766433 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:07:07.224265 sshd[4848]: Connection closed by 4.153.228.146 port 50712 Jan 23 00:07:07.225049 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:07.232891 systemd[1]: sshd@8-172.31.17.104:22-4.153.228.146:50712.service: Deactivated successfully. 
Jan 23 00:07:07.238628 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 00:07:07.241430 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:07:07.245442 systemd-logind[1973]: Removed session 9. Jan 23 00:07:12.321351 systemd[1]: Started sshd@9-172.31.17.104:22-4.153.228.146:50728.service - OpenSSH per-connection server daemon (4.153.228.146:50728). Jan 23 00:07:12.853654 sshd[4861]: Accepted publickey for core from 4.153.228.146 port 50728 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:12.855997 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:12.868664 systemd-logind[1973]: New session 10 of user core. Jan 23 00:07:12.872384 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 00:07:13.334810 sshd[4866]: Connection closed by 4.153.228.146 port 50728 Jan 23 00:07:13.335606 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:13.346899 systemd[1]: sshd@9-172.31.17.104:22-4.153.228.146:50728.service: Deactivated successfully. Jan 23 00:07:13.353629 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 00:07:13.358532 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit. Jan 23 00:07:13.362659 systemd-logind[1973]: Removed session 10. Jan 23 00:07:18.446535 systemd[1]: Started sshd@10-172.31.17.104:22-4.153.228.146:56510.service - OpenSSH per-connection server daemon (4.153.228.146:56510). Jan 23 00:07:19.011152 sshd[4878]: Accepted publickey for core from 4.153.228.146 port 56510 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:19.013768 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:19.022595 systemd-logind[1973]: New session 11 of user core. Jan 23 00:07:19.035406 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 23 00:07:19.503745 sshd[4881]: Connection closed by 4.153.228.146 port 56510 Jan 23 00:07:19.505442 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:19.516633 systemd[1]: sshd@10-172.31.17.104:22-4.153.228.146:56510.service: Deactivated successfully. Jan 23 00:07:19.522821 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 00:07:19.526018 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit. Jan 23 00:07:19.529437 systemd-logind[1973]: Removed session 11. Jan 23 00:07:19.590232 systemd[1]: Started sshd@11-172.31.17.104:22-4.153.228.146:56514.service - OpenSSH per-connection server daemon (4.153.228.146:56514). Jan 23 00:07:20.113167 sshd[4894]: Accepted publickey for core from 4.153.228.146 port 56514 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:20.114778 sshd-session[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:20.123702 systemd-logind[1973]: New session 12 of user core. Jan 23 00:07:20.134397 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 00:07:20.701905 sshd[4897]: Connection closed by 4.153.228.146 port 56514 Jan 23 00:07:20.702403 sshd-session[4894]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:20.710543 systemd[1]: sshd@11-172.31.17.104:22-4.153.228.146:56514.service: Deactivated successfully. Jan 23 00:07:20.715502 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 00:07:20.720148 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit. Jan 23 00:07:20.722126 systemd-logind[1973]: Removed session 12. Jan 23 00:07:20.805669 systemd[1]: Started sshd@12-172.31.17.104:22-4.153.228.146:56524.service - OpenSSH per-connection server daemon (4.153.228.146:56524). 
Jan 23 00:07:21.387587 sshd[4907]: Accepted publickey for core from 4.153.228.146 port 56524 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:21.389810 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:21.397182 systemd-logind[1973]: New session 13 of user core. Jan 23 00:07:21.408412 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 00:07:21.880243 sshd[4912]: Connection closed by 4.153.228.146 port 56524 Jan 23 00:07:21.880095 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:21.887692 systemd[1]: sshd@12-172.31.17.104:22-4.153.228.146:56524.service: Deactivated successfully. Jan 23 00:07:21.892901 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 00:07:21.898319 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit. Jan 23 00:07:21.900805 systemd-logind[1973]: Removed session 13. Jan 23 00:07:26.976794 systemd[1]: Started sshd@13-172.31.17.104:22-4.153.228.146:55196.service - OpenSSH per-connection server daemon (4.153.228.146:55196). Jan 23 00:07:27.491031 sshd[4924]: Accepted publickey for core from 4.153.228.146 port 55196 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:27.493546 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:27.503421 systemd-logind[1973]: New session 14 of user core. Jan 23 00:07:27.513450 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 00:07:27.964734 sshd[4927]: Connection closed by 4.153.228.146 port 55196 Jan 23 00:07:27.964454 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:27.972974 systemd[1]: sshd@13-172.31.17.104:22-4.153.228.146:55196.service: Deactivated successfully. Jan 23 00:07:27.977625 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 00:07:27.980685 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit. Jan 23 00:07:27.984015 systemd-logind[1973]: Removed session 14. Jan 23 00:07:33.073293 systemd[1]: Started sshd@14-172.31.17.104:22-4.153.228.146:55210.service - OpenSSH per-connection server daemon (4.153.228.146:55210). Jan 23 00:07:33.646801 sshd[4940]: Accepted publickey for core from 4.153.228.146 port 55210 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:33.649823 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:33.658696 systemd-logind[1973]: New session 15 of user core. Jan 23 00:07:33.664348 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 00:07:34.159496 sshd[4943]: Connection closed by 4.153.228.146 port 55210 Jan 23 00:07:34.160737 sshd-session[4940]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:34.169072 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit. Jan 23 00:07:34.170568 systemd[1]: sshd@14-172.31.17.104:22-4.153.228.146:55210.service: Deactivated successfully. Jan 23 00:07:34.175723 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 00:07:34.179534 systemd-logind[1973]: Removed session 15. Jan 23 00:07:39.260545 systemd[1]: Started sshd@15-172.31.17.104:22-4.153.228.146:39100.service - OpenSSH per-connection server daemon (4.153.228.146:39100). Jan 23 00:07:39.818887 sshd[4956]: Accepted publickey for core from 4.153.228.146 port 39100 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:07:39.821260 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:39.831204 systemd-logind[1973]: New session 16 of user core. Jan 23 00:07:39.839387 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 00:07:40.314720 sshd[4959]: Connection closed by 4.153.228.146 port 39100
Jan 23 00:07:40.315653 sshd-session[4956]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:40.322557 systemd[1]: sshd@15-172.31.17.104:22-4.153.228.146:39100.service: Deactivated successfully.
Jan 23 00:07:40.327523 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 00:07:40.330291 systemd-logind[1973]: Session 16 logged out. Waiting for processes to exit.
Jan 23 00:07:40.333817 systemd-logind[1973]: Removed session 16.
Jan 23 00:07:40.401013 systemd[1]: Started sshd@16-172.31.17.104:22-4.153.228.146:39108.service - OpenSSH per-connection server daemon (4.153.228.146:39108).
Jan 23 00:07:40.927322 sshd[4971]: Accepted publickey for core from 4.153.228.146 port 39108 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:40.929727 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:40.938341 systemd-logind[1973]: New session 17 of user core.
Jan 23 00:07:40.944380 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 00:07:41.476320 sshd[4974]: Connection closed by 4.153.228.146 port 39108
Jan 23 00:07:41.477306 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:41.487273 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit.
Jan 23 00:07:41.488091 systemd[1]: sshd@16-172.31.17.104:22-4.153.228.146:39108.service: Deactivated successfully.
Jan 23 00:07:41.492964 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 00:07:41.496570 systemd-logind[1973]: Removed session 17.
Jan 23 00:07:41.572254 systemd[1]: Started sshd@17-172.31.17.104:22-4.153.228.146:39112.service - OpenSSH per-connection server daemon (4.153.228.146:39112).
Jan 23 00:07:42.090418 sshd[4984]: Accepted publickey for core from 4.153.228.146 port 39112 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:42.092617 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:42.102204 systemd-logind[1973]: New session 18 of user core.
Jan 23 00:07:42.112417 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 00:07:43.451090 sshd[4987]: Connection closed by 4.153.228.146 port 39112
Jan 23 00:07:43.452262 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:43.468044 systemd[1]: sshd@17-172.31.17.104:22-4.153.228.146:39112.service: Deactivated successfully.
Jan 23 00:07:43.468781 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit.
Jan 23 00:07:43.476270 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 00:07:43.482821 systemd-logind[1973]: Removed session 18.
Jan 23 00:07:43.542627 systemd[1]: Started sshd@18-172.31.17.104:22-4.153.228.146:39120.service - OpenSSH per-connection server daemon (4.153.228.146:39120).
Jan 23 00:07:44.068785 sshd[5004]: Accepted publickey for core from 4.153.228.146 port 39120 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:44.070427 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:44.079541 systemd-logind[1973]: New session 19 of user core.
Jan 23 00:07:44.085450 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 00:07:44.803178 sshd[5007]: Connection closed by 4.153.228.146 port 39120
Jan 23 00:07:44.803027 sshd-session[5004]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:44.812638 systemd[1]: sshd@18-172.31.17.104:22-4.153.228.146:39120.service: Deactivated successfully.
Jan 23 00:07:44.817798 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 00:07:44.821326 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit.
Jan 23 00:07:44.824324 systemd-logind[1973]: Removed session 19.
Jan 23 00:07:44.895856 systemd[1]: Started sshd@19-172.31.17.104:22-4.153.228.146:36976.service - OpenSSH per-connection server daemon (4.153.228.146:36976).
Jan 23 00:07:45.419919 sshd[5017]: Accepted publickey for core from 4.153.228.146 port 36976 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:45.422881 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:45.431449 systemd-logind[1973]: New session 20 of user core.
Jan 23 00:07:45.444361 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 00:07:45.914151 sshd[5020]: Connection closed by 4.153.228.146 port 36976
Jan 23 00:07:45.915642 sshd-session[5017]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:45.925730 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit.
Jan 23 00:07:45.926971 systemd[1]: sshd@19-172.31.17.104:22-4.153.228.146:36976.service: Deactivated successfully.
Jan 23 00:07:45.932337 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 00:07:45.936831 systemd-logind[1973]: Removed session 20.
Jan 23 00:07:51.010553 systemd[1]: Started sshd@20-172.31.17.104:22-4.153.228.146:36984.service - OpenSSH per-connection server daemon (4.153.228.146:36984).
Jan 23 00:07:51.527236 sshd[5036]: Accepted publickey for core from 4.153.228.146 port 36984 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:51.532049 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:51.540968 systemd-logind[1973]: New session 21 of user core.
Jan 23 00:07:51.548376 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 00:07:51.992629 sshd[5039]: Connection closed by 4.153.228.146 port 36984
Jan 23 00:07:51.993523 sshd-session[5036]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:52.002616 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit.
Jan 23 00:07:52.004029 systemd[1]: sshd@20-172.31.17.104:22-4.153.228.146:36984.service: Deactivated successfully.
Jan 23 00:07:52.009942 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 00:07:52.014851 systemd-logind[1973]: Removed session 21.
Jan 23 00:07:57.099095 systemd[1]: Started sshd@21-172.31.17.104:22-4.153.228.146:55746.service - OpenSSH per-connection server daemon (4.153.228.146:55746).
Jan 23 00:07:57.667900 sshd[5050]: Accepted publickey for core from 4.153.228.146 port 55746 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:07:57.670356 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:57.680207 systemd-logind[1973]: New session 22 of user core.
Jan 23 00:07:57.685401 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 00:07:58.165616 sshd[5053]: Connection closed by 4.153.228.146 port 55746
Jan 23 00:07:58.166576 sshd-session[5050]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:58.175801 systemd[1]: sshd@21-172.31.17.104:22-4.153.228.146:55746.service: Deactivated successfully.
Jan 23 00:07:58.179883 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 00:07:58.181842 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit.
Jan 23 00:07:58.184969 systemd-logind[1973]: Removed session 22.
Jan 23 00:08:03.277286 systemd[1]: Started sshd@22-172.31.17.104:22-4.153.228.146:55760.service - OpenSSH per-connection server daemon (4.153.228.146:55760).
Jan 23 00:08:03.837204 sshd[5064]: Accepted publickey for core from 4.153.228.146 port 55760 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:03.838908 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:03.849913 systemd-logind[1973]: New session 23 of user core.
Jan 23 00:08:03.857403 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 23 00:08:04.330147 sshd[5067]: Connection closed by 4.153.228.146 port 55760
Jan 23 00:08:04.329289 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:04.337429 systemd[1]: sshd@22-172.31.17.104:22-4.153.228.146:55760.service: Deactivated successfully.
Jan 23 00:08:04.340910 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 00:08:04.343888 systemd-logind[1973]: Session 23 logged out. Waiting for processes to exit.
Jan 23 00:08:04.347203 systemd-logind[1973]: Removed session 23.
Jan 23 00:08:04.426678 systemd[1]: Started sshd@23-172.31.17.104:22-4.153.228.146:55774.service - OpenSSH per-connection server daemon (4.153.228.146:55774).
Jan 23 00:08:04.979634 sshd[5078]: Accepted publickey for core from 4.153.228.146 port 55774 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:04.981767 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:04.989640 systemd-logind[1973]: New session 24 of user core.
Jan 23 00:08:04.998383 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 00:08:07.265786 containerd[2007]: time="2026-01-23T00:08:07.265523747Z" level=info msg="StopContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" with timeout 30 (s)"
Jan 23 00:08:07.267449 containerd[2007]: time="2026-01-23T00:08:07.267083879Z" level=info msg="Stop container \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" with signal terminated"
Jan 23 00:08:07.316300 systemd[1]: cri-containerd-72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9.scope: Deactivated successfully.
Jan 23 00:08:07.320509 containerd[2007]: time="2026-01-23T00:08:07.320275175Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 00:08:07.325927 containerd[2007]: time="2026-01-23T00:08:07.325725875Z" level=info msg="received container exit event container_id:\"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" id:\"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" pid:4117 exited_at:{seconds:1769126887 nanos:323936135}"
Jan 23 00:08:07.343047 containerd[2007]: time="2026-01-23T00:08:07.342961920Z" level=info msg="StopContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" with timeout 2 (s)"
Jan 23 00:08:07.344026 containerd[2007]: time="2026-01-23T00:08:07.343951560Z" level=info msg="Stop container \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" with signal terminated"
Jan 23 00:08:07.383845 systemd-networkd[1817]: lxc_health: Link DOWN
Jan 23 00:08:07.384216 systemd-networkd[1817]: lxc_health: Lost carrier
Jan 23 00:08:07.413627 systemd[1]: cri-containerd-381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18.scope: Deactivated successfully.
Jan 23 00:08:07.414706 systemd[1]: cri-containerd-381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18.scope: Consumed 14.541s CPU time, 122.9M memory peak, 128K read from disk, 12.9M written to disk.
Jan 23 00:08:07.423530 containerd[2007]: time="2026-01-23T00:08:07.423415620Z" level=info msg="received container exit event container_id:\"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" id:\"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" pid:4150 exited_at:{seconds:1769126887 nanos:422338416}"
Jan 23 00:08:07.437237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9-rootfs.mount: Deactivated successfully.
Jan 23 00:08:07.453027 containerd[2007]: time="2026-01-23T00:08:07.452950620Z" level=info msg="StopContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" returns successfully"
Jan 23 00:08:07.455298 containerd[2007]: time="2026-01-23T00:08:07.455235816Z" level=info msg="StopPodSandbox for \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\""
Jan 23 00:08:07.455779 containerd[2007]: time="2026-01-23T00:08:07.455693700Z" level=info msg="Container to stop \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.479658 systemd[1]: cri-containerd-2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6.scope: Deactivated successfully.
Jan 23 00:08:07.489010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18-rootfs.mount: Deactivated successfully.
Jan 23 00:08:07.495179 containerd[2007]: time="2026-01-23T00:08:07.494700180Z" level=info msg="received sandbox exit event container_id:\"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" id:\"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" exit_status:137 exited_at:{seconds:1769126887 nanos:492249744}" monitor_name=podsandbox
Jan 23 00:08:07.506862 containerd[2007]: time="2026-01-23T00:08:07.506589216Z" level=info msg="StopContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" returns successfully"
Jan 23 00:08:07.507594 containerd[2007]: time="2026-01-23T00:08:07.507535596Z" level=info msg="StopPodSandbox for \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\""
Jan 23 00:08:07.507729 containerd[2007]: time="2026-01-23T00:08:07.507643956Z" level=info msg="Container to stop \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.507729 containerd[2007]: time="2026-01-23T00:08:07.507671940Z" level=info msg="Container to stop \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.507729 containerd[2007]: time="2026-01-23T00:08:07.507693600Z" level=info msg="Container to stop \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.507729 containerd[2007]: time="2026-01-23T00:08:07.507715188Z" level=info msg="Container to stop \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.508049 containerd[2007]: time="2026-01-23T00:08:07.507735144Z" level=info msg="Container to stop \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 23 00:08:07.533028 systemd[1]: cri-containerd-37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3.scope: Deactivated successfully.
Jan 23 00:08:07.540713 containerd[2007]: time="2026-01-23T00:08:07.540638520Z" level=info msg="received sandbox exit event container_id:\"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" id:\"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" exit_status:137 exited_at:{seconds:1769126887 nanos:537680688}" monitor_name=podsandbox
Jan 23 00:08:07.576075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6-rootfs.mount: Deactivated successfully.
Jan 23 00:08:07.580131 containerd[2007]: time="2026-01-23T00:08:07.579995869Z" level=info msg="shim disconnected" id=2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6 namespace=k8s.io
Jan 23 00:08:07.581265 containerd[2007]: time="2026-01-23T00:08:07.580062997Z" level=warning msg="cleaning up after shim disconnected" id=2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6 namespace=k8s.io
Jan 23 00:08:07.581646 containerd[2007]: time="2026-01-23T00:08:07.581210869Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 00:08:07.599599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3-rootfs.mount: Deactivated successfully.
Jan 23 00:08:07.604167 containerd[2007]: time="2026-01-23T00:08:07.603165493Z" level=info msg="shim disconnected" id=37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3 namespace=k8s.io
Jan 23 00:08:07.604167 containerd[2007]: time="2026-01-23T00:08:07.603223201Z" level=warning msg="cleaning up after shim disconnected" id=37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3 namespace=k8s.io
Jan 23 00:08:07.604167 containerd[2007]: time="2026-01-23T00:08:07.603274117Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 00:08:07.620132 containerd[2007]: time="2026-01-23T00:08:07.619245913Z" level=info msg="received sandbox container exit event sandbox_id:\"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" exit_status:137 exited_at:{seconds:1769126887 nanos:492249744}" monitor_name=criService
Jan 23 00:08:07.620450 containerd[2007]: time="2026-01-23T00:08:07.620373073Z" level=info msg="TearDown network for sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" successfully"
Jan 23 00:08:07.620528 containerd[2007]: time="2026-01-23T00:08:07.620445985Z" level=info msg="StopPodSandbox for \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" returns successfully"
Jan 23 00:08:07.623198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6-shm.mount: Deactivated successfully.
Jan 23 00:08:07.648206 containerd[2007]: time="2026-01-23T00:08:07.646900453Z" level=info msg="TearDown network for sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" successfully"
Jan 23 00:08:07.648206 containerd[2007]: time="2026-01-23T00:08:07.646954633Z" level=info msg="StopPodSandbox for \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" returns successfully"
Jan 23 00:08:07.648206 containerd[2007]: time="2026-01-23T00:08:07.647118745Z" level=info msg="received sandbox container exit event sandbox_id:\"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" exit_status:137 exited_at:{seconds:1769126887 nanos:537680688}" monitor_name=criService
Jan 23 00:08:07.749447 kubelet[3320]: I0123 00:08:07.749345 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-lib-modules\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.749447 kubelet[3320]: I0123 00:08:07.749418 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hostproc\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749456 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-bpf-maps\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749508 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-cilium-config-path\") pod \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\" (UID: \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749548 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-config-path\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749584 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-cgroup\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749615 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-run\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.750085 kubelet[3320]: I0123 00:08:07.749653 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd22m\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-kube-api-access-gd22m\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749772 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-net\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749809 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cni-path\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749846 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-clustermesh-secrets\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749883 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26ls7\" (UniqueName: \"kubernetes.io/projected/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-kube-api-access-26ls7\") pod \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\" (UID: \"c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749952 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-xtables-lock\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751616 kubelet[3320]: I0123 00:08:07.749986 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-kernel\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751916 kubelet[3320]: I0123 00:08:07.750022 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-etc-cni-netd\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751916 kubelet[3320]: I0123 00:08:07.750060 3320 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hubble-tls\") pod \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\" (UID: \"38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91\") "
Jan 23 00:08:07.751916 kubelet[3320]: I0123 00:08:07.750169 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.752559 kubelet[3320]: I0123 00:08:07.752352 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hostproc" (OuterVolumeSpecName: "hostproc") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.752559 kubelet[3320]: I0123 00:08:07.752474 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.753249 kubelet[3320]: I0123 00:08:07.753098 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.753249 kubelet[3320]: I0123 00:08:07.753209 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cni-path" (OuterVolumeSpecName: "cni-path") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.754958 kubelet[3320]: I0123 00:08:07.754895 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.755207 kubelet[3320]: I0123 00:08:07.755177 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.762442 kubelet[3320]: I0123 00:08:07.757026 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.762442 kubelet[3320]: I0123 00:08:07.757088 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.762640 kubelet[3320]: I0123 00:08:07.762571 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 00:08:07.762712 kubelet[3320]: I0123 00:08:07.762661 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 23 00:08:07.763180 kubelet[3320]: I0123 00:08:07.762795 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 00:08:07.765075 kubelet[3320]: I0123 00:08:07.765002 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-kube-api-access-gd22m" (OuterVolumeSpecName: "kube-api-access-gd22m") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "kube-api-access-gd22m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 00:08:07.765619 kubelet[3320]: I0123 00:08:07.765510 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" (UID: "38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 00:08:07.770595 kubelet[3320]: I0123 00:08:07.770522 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-kube-api-access-26ls7" (OuterVolumeSpecName: "kube-api-access-26ls7") pod "c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc" (UID: "c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc"). InnerVolumeSpecName "kube-api-access-26ls7". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 00:08:07.773618 kubelet[3320]: I0123 00:08:07.773541 3320 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc" (UID: "c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 00:08:07.850875 kubelet[3320]: I0123 00:08:07.850699 3320 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-cilium-config-path\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.850875 kubelet[3320]: I0123 00:08:07.850751 3320 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-config-path\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851088 3320 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-cgroup\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851151 3320 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cilium-run\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851175 3320 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gd22m\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-kube-api-access-gd22m\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851241 3320 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-net\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851271 3320 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-xtables-lock\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851321 3320 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-cni-path\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.851423 kubelet[3320]: I0123 00:08:07.851344 3320 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-clustermesh-secrets\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.851366 3320 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26ls7\" (UniqueName: \"kubernetes.io/projected/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc-kube-api-access-26ls7\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852016 3320 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-host-proc-sys-kernel\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852078 3320 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-etc-cni-netd\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852184 3320 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hubble-tls\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852209 3320 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-lib-modules\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852253 3320 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-hostproc\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.852367 kubelet[3320]: I0123 00:08:07.852305 3320 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91-bpf-maps\") on node \"ip-172-31-17-104\" DevicePath \"\""
Jan 23 00:08:07.964346 kubelet[3320]: E0123 00:08:07.964277 3320 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 00:08:08.153174 kubelet[3320]: I0123 00:08:08.152963 3320 scope.go:117] "RemoveContainer" containerID="72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9"
Jan 23 00:08:08.162781 containerd[2007]: time="2026-01-23T00:08:08.162712152Z" level=info msg="RemoveContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\""
Jan 23 00:08:08.181005 systemd[1]: Removed slice kubepods-besteffort-podc33a51f7_5d6f_4d5d_aeee_c9d8bd6575dc.slice - libcontainer container kubepods-besteffort-podc33a51f7_5d6f_4d5d_aeee_c9d8bd6575dc.slice.
Jan 23 00:08:08.184371 containerd[2007]: time="2026-01-23T00:08:08.184304904Z" level=info msg="RemoveContainer for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" returns successfully" Jan 23 00:08:08.187330 kubelet[3320]: I0123 00:08:08.187224 3320 scope.go:117] "RemoveContainer" containerID="72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9" Jan 23 00:08:08.188328 containerd[2007]: time="2026-01-23T00:08:08.187953744Z" level=error msg="ContainerStatus for \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\": not found" Jan 23 00:08:08.188471 kubelet[3320]: E0123 00:08:08.188247 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\": not found" containerID="72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9" Jan 23 00:08:08.188471 kubelet[3320]: I0123 00:08:08.188296 3320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9"} err="failed to get container status \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\": rpc error: code = NotFound desc = an error occurred when try to find container \"72eacc09b2290c7b77fc86dbdc255ed70eecacaae68d8af6c07e977c94a19db9\": not found" Jan 23 00:08:08.188471 kubelet[3320]: I0123 00:08:08.188411 3320 scope.go:117] "RemoveContainer" containerID="381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18" Jan 23 00:08:08.196331 containerd[2007]: time="2026-01-23T00:08:08.196233048Z" level=info msg="RemoveContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\"" Jan 23 00:08:08.207085 
systemd[1]: Removed slice kubepods-burstable-pod38f2e9b7_9f80_49bf_b70a_ba7f92f0ab91.slice - libcontainer container kubepods-burstable-pod38f2e9b7_9f80_49bf_b70a_ba7f92f0ab91.slice. Jan 23 00:08:08.207562 systemd[1]: kubepods-burstable-pod38f2e9b7_9f80_49bf_b70a_ba7f92f0ab91.slice: Consumed 14.736s CPU time, 123.4M memory peak, 128K read from disk, 12.9M written to disk. Jan 23 00:08:08.210760 containerd[2007]: time="2026-01-23T00:08:08.210653808Z" level=info msg="RemoveContainer for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" returns successfully" Jan 23 00:08:08.211508 kubelet[3320]: I0123 00:08:08.211139 3320 scope.go:117] "RemoveContainer" containerID="ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef" Jan 23 00:08:08.214478 containerd[2007]: time="2026-01-23T00:08:08.214425060Z" level=info msg="RemoveContainer for \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\"" Jan 23 00:08:08.227830 containerd[2007]: time="2026-01-23T00:08:08.224094480Z" level=info msg="RemoveContainer for \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" returns successfully" Jan 23 00:08:08.229815 kubelet[3320]: I0123 00:08:08.229634 3320 scope.go:117] "RemoveContainer" containerID="5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de" Jan 23 00:08:08.240376 containerd[2007]: time="2026-01-23T00:08:08.240321852Z" level=info msg="RemoveContainer for \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\"" Jan 23 00:08:08.253295 containerd[2007]: time="2026-01-23T00:08:08.252396456Z" level=info msg="RemoveContainer for \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" returns successfully" Jan 23 00:08:08.253496 kubelet[3320]: I0123 00:08:08.253084 3320 scope.go:117] "RemoveContainer" containerID="3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303" Jan 23 00:08:08.256914 containerd[2007]: time="2026-01-23T00:08:08.256816896Z" level=info 
msg="RemoveContainer for \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\"" Jan 23 00:08:08.262045 containerd[2007]: time="2026-01-23T00:08:08.261934596Z" level=info msg="RemoveContainer for \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" returns successfully" Jan 23 00:08:08.262521 kubelet[3320]: I0123 00:08:08.262324 3320 scope.go:117] "RemoveContainer" containerID="24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e" Jan 23 00:08:08.265286 containerd[2007]: time="2026-01-23T00:08:08.265240380Z" level=info msg="RemoveContainer for \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\"" Jan 23 00:08:08.270212 containerd[2007]: time="2026-01-23T00:08:08.270163068Z" level=info msg="RemoveContainer for \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" returns successfully" Jan 23 00:08:08.270898 kubelet[3320]: I0123 00:08:08.270825 3320 scope.go:117] "RemoveContainer" containerID="381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18" Jan 23 00:08:08.271510 containerd[2007]: time="2026-01-23T00:08:08.271406388Z" level=error msg="ContainerStatus for \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\": not found" Jan 23 00:08:08.271685 kubelet[3320]: E0123 00:08:08.271643 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\": not found" containerID="381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18" Jan 23 00:08:08.271773 kubelet[3320]: I0123 00:08:08.271695 3320 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18"} err="failed to get container status \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\": rpc error: code = NotFound desc = an error occurred when try to find container \"381a56dda5873c639b29838cf284e26222b6aa44f7d880ef58c306386f354b18\": not found" Jan 23 00:08:08.271773 kubelet[3320]: I0123 00:08:08.271732 3320 scope.go:117] "RemoveContainer" containerID="ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef" Jan 23 00:08:08.272080 containerd[2007]: time="2026-01-23T00:08:08.272028384Z" level=error msg="ContainerStatus for \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\": not found" Jan 23 00:08:08.272539 kubelet[3320]: E0123 00:08:08.272310 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\": not found" containerID="ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef" Jan 23 00:08:08.272539 kubelet[3320]: I0123 00:08:08.272351 3320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef"} err="failed to get container status \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecefb24da6bb453e6b76971415e3161910002bd5130e2a8dcd88d736dea534ef\": not found" Jan 23 00:08:08.272539 kubelet[3320]: I0123 00:08:08.272385 3320 scope.go:117] "RemoveContainer" containerID="5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de" Jan 23 00:08:08.272726 containerd[2007]: 
time="2026-01-23T00:08:08.272644080Z" level=error msg="ContainerStatus for \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\": not found" Jan 23 00:08:08.273158 kubelet[3320]: E0123 00:08:08.273046 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\": not found" containerID="5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de" Jan 23 00:08:08.273325 kubelet[3320]: I0123 00:08:08.273250 3320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de"} err="failed to get container status \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\": rpc error: code = NotFound desc = an error occurred when try to find container \"5200348e2aa01e010d61763b260fc308e50b2652eca2fd7a6c8d2edcc18fb0de\": not found" Jan 23 00:08:08.273510 kubelet[3320]: I0123 00:08:08.273288 3320 scope.go:117] "RemoveContainer" containerID="3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303" Jan 23 00:08:08.273997 containerd[2007]: time="2026-01-23T00:08:08.273945540Z" level=error msg="ContainerStatus for \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\": not found" Jan 23 00:08:08.274438 kubelet[3320]: E0123 00:08:08.274248 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\": not 
found" containerID="3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303" Jan 23 00:08:08.274438 kubelet[3320]: I0123 00:08:08.274291 3320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303"} err="failed to get container status \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a6bc5d51b111ac9bffa9c71e9150330317b4a9473287918679337130cc52303\": not found" Jan 23 00:08:08.274438 kubelet[3320]: I0123 00:08:08.274321 3320 scope.go:117] "RemoveContainer" containerID="24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e" Jan 23 00:08:08.274648 containerd[2007]: time="2026-01-23T00:08:08.274592952Z" level=error msg="ContainerStatus for \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\": not found" Jan 23 00:08:08.274997 kubelet[3320]: E0123 00:08:08.274897 3320 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\": not found" containerID="24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e" Jan 23 00:08:08.274997 kubelet[3320]: I0123 00:08:08.274964 3320 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e"} err="failed to get container status \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"24e4fd5abd5bf83aa1e766cac249fb47e9fa967a9f50aeec466ac81278a10c0e\": not found" Jan 23 
00:08:08.433227 systemd[1]: var-lib-kubelet-pods-c33a51f7\x2d5d6f\x2d4d5d\x2daeee\x2dc9d8bd6575dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26ls7.mount: Deactivated successfully. Jan 23 00:08:08.433917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3-shm.mount: Deactivated successfully. Jan 23 00:08:08.434052 systemd[1]: var-lib-kubelet-pods-38f2e9b7\x2d9f80\x2d49bf\x2db70a\x2dba7f92f0ab91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgd22m.mount: Deactivated successfully. Jan 23 00:08:08.434230 systemd[1]: var-lib-kubelet-pods-38f2e9b7\x2d9f80\x2d49bf\x2db70a\x2dba7f92f0ab91-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 00:08:08.434468 systemd[1]: var-lib-kubelet-pods-38f2e9b7\x2d9f80\x2d49bf\x2db70a\x2dba7f92f0ab91-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 00:08:08.489542 kubelet[3320]: I0123 00:08:08.489479 3320 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" path="/var/lib/kubelet/pods/38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91/volumes" Jan 23 00:08:08.490910 kubelet[3320]: I0123 00:08:08.490853 3320 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc" path="/var/lib/kubelet/pods/c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc/volumes" Jan 23 00:08:09.234889 sshd[5081]: Connection closed by 4.153.228.146 port 55774 Jan 23 00:08:09.235402 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:09.244455 systemd-logind[1973]: Session 24 logged out. Waiting for processes to exit. Jan 23 00:08:09.245093 systemd[1]: sshd@23-172.31.17.104:22-4.153.228.146:55774.service: Deactivated successfully. Jan 23 00:08:09.248869 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 23 00:08:09.249688 systemd[1]: session-24.scope: Consumed 1.318s CPU time, 23.5M memory peak. Jan 23 00:08:09.253249 systemd-logind[1973]: Removed session 24. Jan 23 00:08:09.323530 systemd[1]: Started sshd@24-172.31.17.104:22-4.153.228.146:55518.service - OpenSSH per-connection server daemon (4.153.228.146:55518). Jan 23 00:08:09.844155 sshd[5228]: Accepted publickey for core from 4.153.228.146 port 55518 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:09.845915 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:09.855203 systemd-logind[1973]: New session 25 of user core. Jan 23 00:08:09.861408 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 00:08:10.219558 ntpd[2206]: Deleting 10 lxc_health, [fe80::dce3:50ff:fec8:54ca%8]:123, stats: received=0, sent=0, dropped=0, active_time=81 secs Jan 23 00:08:10.221284 ntpd[2206]: 23 Jan 00:08:10 ntpd[2206]: Deleting 10 lxc_health, [fe80::dce3:50ff:fec8:54ca%8]:123, stats: received=0, sent=0, dropped=0, active_time=81 secs Jan 23 00:08:12.280156 sshd[5231]: Connection closed by 4.153.228.146 port 55518 Jan 23 00:08:12.282639 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:12.290380 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 00:08:12.291599 systemd[1]: session-25.scope: Consumed 1.933s CPU time, 25.7M memory peak. Jan 23 00:08:12.292466 systemd[1]: sshd@24-172.31.17.104:22-4.153.228.146:55518.service: Deactivated successfully. Jan 23 00:08:12.304775 systemd-logind[1973]: Session 25 logged out. Waiting for processes to exit. 
Jan 23 00:08:12.309950 kubelet[3320]: I0123 00:08:12.309884 3320 memory_manager.go:355] "RemoveStaleState removing state" podUID="c33a51f7-5d6f-4d5d-aeee-c9d8bd6575dc" containerName="cilium-operator" Jan 23 00:08:12.309950 kubelet[3320]: I0123 00:08:12.309930 3320 memory_manager.go:355] "RemoveStaleState removing state" podUID="38f2e9b7-9f80-49bf-b70a-ba7f92f0ab91" containerName="cilium-agent" Jan 23 00:08:12.312279 systemd-logind[1973]: Removed session 25. Jan 23 00:08:12.333816 systemd[1]: Created slice kubepods-burstable-pod0566a5c3_7ee9_4357_ad8f_de7b8b3d9a37.slice - libcontainer container kubepods-burstable-pod0566a5c3_7ee9_4357_ad8f_de7b8b3d9a37.slice. Jan 23 00:08:12.377163 systemd[1]: Started sshd@25-172.31.17.104:22-4.153.228.146:55530.service - OpenSSH per-connection server daemon (4.153.228.146:55530). Jan 23 00:08:12.383449 kubelet[3320]: I0123 00:08:12.382801 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-cilium-run\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.383783 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-cilium-cgroup\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.383845 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-xtables-lock\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.383882 
3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-clustermesh-secrets\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.383924 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-hubble-tls\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.383965 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-cilium-config-path\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385138 kubelet[3320]: I0123 00:08:12.384002 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-host-proc-sys-kernel\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.385562 kubelet[3320]: I0123 00:08:12.384037 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-hostproc\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.384090 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-cilium-ipsec-secrets\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.386247 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-host-proc-sys-net\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.386300 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-etc-cni-netd\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.386353 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-bpf-maps\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.386392 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-lib-modules\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.386747 kubelet[3320]: I0123 00:08:12.386427 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ksst\" (UniqueName: \"kubernetes.io/projected/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-kube-api-access-8ksst\") pod 
\"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.387287 kubelet[3320]: I0123 00:08:12.386472 3320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37-cni-path\") pod \"cilium-tgbkc\" (UID: \"0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37\") " pod="kube-system/cilium-tgbkc" Jan 23 00:08:12.458177 containerd[2007]: time="2026-01-23T00:08:12.458080265Z" level=info msg="StopPodSandbox for \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\"" Jan 23 00:08:12.459203 containerd[2007]: time="2026-01-23T00:08:12.458299505Z" level=info msg="TearDown network for sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" successfully" Jan 23 00:08:12.459203 containerd[2007]: time="2026-01-23T00:08:12.458325641Z" level=info msg="StopPodSandbox for \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" returns successfully" Jan 23 00:08:12.459739 containerd[2007]: time="2026-01-23T00:08:12.459689177Z" level=info msg="RemovePodSandbox for \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\"" Jan 23 00:08:12.459878 containerd[2007]: time="2026-01-23T00:08:12.459852485Z" level=info msg="Forcibly stopping sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\"" Jan 23 00:08:12.461187 containerd[2007]: time="2026-01-23T00:08:12.460995389Z" level=info msg="TearDown network for sandbox \"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" successfully" Jan 23 00:08:12.465472 containerd[2007]: time="2026-01-23T00:08:12.465413429Z" level=info msg="Ensure that sandbox 2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6 in task-service has been cleanup successfully" Jan 23 00:08:12.479068 containerd[2007]: time="2026-01-23T00:08:12.478957457Z" level=info msg="RemovePodSandbox 
\"2f3e297c8fd44665709d230c2ec631f04987ff7e7cfdcdf4ea1ba8567045e1c6\" returns successfully" Jan 23 00:08:12.480204 containerd[2007]: time="2026-01-23T00:08:12.480139817Z" level=info msg="StopPodSandbox for \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\"" Jan 23 00:08:12.480416 containerd[2007]: time="2026-01-23T00:08:12.480377405Z" level=info msg="TearDown network for sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" successfully" Jan 23 00:08:12.480474 containerd[2007]: time="2026-01-23T00:08:12.480414449Z" level=info msg="StopPodSandbox for \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" returns successfully" Jan 23 00:08:12.483171 containerd[2007]: time="2026-01-23T00:08:12.481680257Z" level=info msg="RemovePodSandbox for \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\"" Jan 23 00:08:12.483171 containerd[2007]: time="2026-01-23T00:08:12.481757873Z" level=info msg="Forcibly stopping sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\"" Jan 23 00:08:12.483171 containerd[2007]: time="2026-01-23T00:08:12.481933745Z" level=info msg="TearDown network for sandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" successfully" Jan 23 00:08:12.484616 containerd[2007]: time="2026-01-23T00:08:12.484538033Z" level=info msg="Ensure that sandbox 37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3 in task-service has been cleanup successfully" Jan 23 00:08:12.495398 containerd[2007]: time="2026-01-23T00:08:12.495321605Z" level=info msg="RemovePodSandbox \"37724345e15172819b29e4395d5b1ab4e9af39f96becb121ef5ba8624df69de3\" returns successfully" Jan 23 00:08:12.647411 containerd[2007]: time="2026-01-23T00:08:12.645972714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgbkc,Uid:0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37,Namespace:kube-system,Attempt:0,}" Jan 23 00:08:12.671968 containerd[2007]: 
time="2026-01-23T00:08:12.670228350Z" level=info msg="connecting to shim ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46" address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:12.713424 systemd[1]: Started cri-containerd-ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46.scope - libcontainer container ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46. Jan 23 00:08:12.769617 containerd[2007]: time="2026-01-23T00:08:12.769523298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tgbkc,Uid:0566a5c3-7ee9-4357-ad8f-de7b8b3d9a37,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\"" Jan 23 00:08:12.778467 containerd[2007]: time="2026-01-23T00:08:12.778415119Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 00:08:12.790569 containerd[2007]: time="2026-01-23T00:08:12.790492699Z" level=info msg="Container 64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:12.798676 containerd[2007]: time="2026-01-23T00:08:12.798563275Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d\"" Jan 23 00:08:12.799943 containerd[2007]: time="2026-01-23T00:08:12.799871575Z" level=info msg="StartContainer for \"64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d\"" Jan 23 00:08:12.801854 containerd[2007]: time="2026-01-23T00:08:12.801798775Z" level=info msg="connecting to shim 64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d" 
address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" protocol=ttrpc version=3 Jan 23 00:08:12.839393 systemd[1]: Started cri-containerd-64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d.scope - libcontainer container 64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d. Jan 23 00:08:12.899031 containerd[2007]: time="2026-01-23T00:08:12.898770871Z" level=info msg="StartContainer for \"64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d\" returns successfully" Jan 23 00:08:12.912337 sshd[5241]: Accepted publickey for core from 4.153.228.146 port 55530 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:12.915907 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:12.930447 systemd[1]: cri-containerd-64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d.scope: Deactivated successfully. Jan 23 00:08:12.932930 systemd-logind[1973]: New session 26 of user core. Jan 23 00:08:12.937400 containerd[2007]: time="2026-01-23T00:08:12.937350955Z" level=info msg="received container exit event container_id:\"64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d\" id:\"64c42115191c2200b356aa03b81458772cc805f0bd4b8d2613df56c88abc1f2d\" pid:5308 exited_at:{seconds:1769126892 nanos:936305431}" Jan 23 00:08:12.939504 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 23 00:08:12.966630 kubelet[3320]: E0123 00:08:12.966562 3320 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 00:08:13.214216 containerd[2007]: time="2026-01-23T00:08:13.213995405Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 00:08:13.226780 containerd[2007]: time="2026-01-23T00:08:13.226714553Z" level=info msg="Container e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:13.235623 containerd[2007]: time="2026-01-23T00:08:13.235532813Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9\""
Jan 23 00:08:13.236940 containerd[2007]: time="2026-01-23T00:08:13.236884685Z" level=info msg="StartContainer for \"e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9\""
Jan 23 00:08:13.240659 containerd[2007]: time="2026-01-23T00:08:13.240467609Z" level=info msg="connecting to shim e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9" address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" protocol=ttrpc version=3
Jan 23 00:08:13.260045 sshd[5328]: Connection closed by 4.153.228.146 port 55530
Jan 23 00:08:13.260518 sshd-session[5241]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:13.273328 systemd[1]: sshd@25-172.31.17.104:22-4.153.228.146:55530.service: Deactivated successfully.
Jan 23 00:08:13.278324 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 00:08:13.287889 systemd-logind[1973]: Session 26 logged out. Waiting for processes to exit.
Jan 23 00:08:13.297507 systemd[1]: Started cri-containerd-e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9.scope - libcontainer container e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9.
Jan 23 00:08:13.302341 systemd-logind[1973]: Removed session 26.
Jan 23 00:08:13.356636 systemd[1]: Started sshd@26-172.31.17.104:22-4.153.228.146:55546.service - OpenSSH per-connection server daemon (4.153.228.146:55546).
Jan 23 00:08:13.371872 containerd[2007]: time="2026-01-23T00:08:13.371391965Z" level=info msg="StartContainer for \"e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9\" returns successfully"
Jan 23 00:08:13.386348 systemd[1]: cri-containerd-e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9.scope: Deactivated successfully.
Jan 23 00:08:13.391202 containerd[2007]: time="2026-01-23T00:08:13.388967634Z" level=info msg="received container exit event container_id:\"e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9\" id:\"e7ba6779cf7e61806dffa83c0b92b02a67ea59eae89db18822ee965a4a7598d9\" pid:5357 exited_at:{seconds:1769126893 nanos:388082178}"
Jan 23 00:08:13.900247 sshd[5371]: Accepted publickey for core from 4.153.228.146 port 55546 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:13.902592 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:13.910757 systemd-logind[1973]: New session 27 of user core.
Jan 23 00:08:13.921423 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 00:08:14.222563 containerd[2007]: time="2026-01-23T00:08:14.222432510Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 00:08:14.244140 containerd[2007]: time="2026-01-23T00:08:14.242611182Z" level=info msg="Container 4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:14.284940 containerd[2007]: time="2026-01-23T00:08:14.284347566Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3\""
Jan 23 00:08:14.294577 containerd[2007]: time="2026-01-23T00:08:14.293167986Z" level=info msg="StartContainer for \"4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3\""
Jan 23 00:08:14.305399 containerd[2007]: time="2026-01-23T00:08:14.305314698Z" level=info msg="connecting to shim 4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3" address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" protocol=ttrpc version=3
Jan 23 00:08:14.356814 systemd[1]: Started cri-containerd-4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3.scope - libcontainer container 4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3.
Jan 23 00:08:14.484698 containerd[2007]: time="2026-01-23T00:08:14.484362691Z" level=info msg="StartContainer for \"4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3\" returns successfully"
Jan 23 00:08:14.484769 systemd[1]: cri-containerd-4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3.scope: Deactivated successfully.
Jan 23 00:08:14.498725 containerd[2007]: time="2026-01-23T00:08:14.498655051Z" level=info msg="received container exit event container_id:\"4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3\" id:\"4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3\" pid:5414 exited_at:{seconds:1769126894 nanos:496775059}"
Jan 23 00:08:14.546309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e52fc8deba34a81a0caf056b7d476684b65150c709066904df5d2aaaa406be3-rootfs.mount: Deactivated successfully.
Jan 23 00:08:15.233504 containerd[2007]: time="2026-01-23T00:08:15.233437459Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 00:08:15.256944 containerd[2007]: time="2026-01-23T00:08:15.256869139Z" level=info msg="Container 2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:15.284241 containerd[2007]: time="2026-01-23T00:08:15.284153971Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7\""
Jan 23 00:08:15.286663 containerd[2007]: time="2026-01-23T00:08:15.286176283Z" level=info msg="StartContainer for \"2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7\""
Jan 23 00:08:15.289339 containerd[2007]: time="2026-01-23T00:08:15.289229743Z" level=info msg="connecting to shim 2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7" address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" protocol=ttrpc version=3
Jan 23 00:08:15.333451 systemd[1]: Started cri-containerd-2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7.scope - libcontainer container 2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7.
Jan 23 00:08:15.388924 systemd[1]: cri-containerd-2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7.scope: Deactivated successfully.
Jan 23 00:08:15.393131 containerd[2007]: time="2026-01-23T00:08:15.392978191Z" level=info msg="received container exit event container_id:\"2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7\" id:\"2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7\" pid:5456 exited_at:{seconds:1769126895 nanos:392286559}"
Jan 23 00:08:15.409524 containerd[2007]: time="2026-01-23T00:08:15.409452524Z" level=info msg="StartContainer for \"2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7\" returns successfully"
Jan 23 00:08:15.436839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a6361df8c89680ec3727bc67df1c420b9130f39faec93cef28fecdca533c5b7-rootfs.mount: Deactivated successfully.
Jan 23 00:08:15.647851 kubelet[3320]: I0123 00:08:15.647719 3320 setters.go:602] "Node became not ready" node="ip-172-31-17-104" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T00:08:15Z","lastTransitionTime":"2026-01-23T00:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 00:08:16.243970 containerd[2007]: time="2026-01-23T00:08:16.242817344Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 00:08:16.277333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430579311.mount: Deactivated successfully.
Jan 23 00:08:16.284537 containerd[2007]: time="2026-01-23T00:08:16.284483348Z" level=info msg="Container 8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:16.306032 containerd[2007]: time="2026-01-23T00:08:16.305980196Z" level=info msg="CreateContainer within sandbox \"ff0be07cf57fffccbab0238637503b8bb58e371dd5b665287a8122157a28bf46\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703\""
Jan 23 00:08:16.308240 containerd[2007]: time="2026-01-23T00:08:16.308190668Z" level=info msg="StartContainer for \"8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703\""
Jan 23 00:08:16.310416 containerd[2007]: time="2026-01-23T00:08:16.310314980Z" level=info msg="connecting to shim 8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703" address="unix:///run/containerd/s/da14957573415a861daa70b0dd20ac4cad4bb99ff3fd7885296c75383e59390b" protocol=ttrpc version=3
Jan 23 00:08:16.357449 systemd[1]: Started cri-containerd-8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703.scope - libcontainer container 8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703.
Jan 23 00:08:16.453449 containerd[2007]: time="2026-01-23T00:08:16.453398685Z" level=info msg="StartContainer for \"8909197cbd6f5252ae90e5a478880d17a406296935f8099bb7829d7004ac4703\" returns successfully"
Jan 23 00:08:17.289755 kubelet[3320]: I0123 00:08:17.289602 3320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tgbkc" podStartSLOduration=5.289552041 podStartE2EDuration="5.289552041s" podCreationTimestamp="2026-01-23 00:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:17.288216405 +0000 UTC m=+125.388355344" watchObservedRunningTime="2026-01-23 00:08:17.289552041 +0000 UTC m=+125.389690968"
Jan 23 00:08:17.358165 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 23 00:08:17.484655 kubelet[3320]: E0123 00:08:17.484579 3320 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-m9xqn" podUID="d550c164-6c20-4574-b63a-bb6e37714081"
Jan 23 00:08:21.701956 (udev-worker)[6037]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 00:08:21.704092 systemd-networkd[1817]: lxc_health: Link UP
Jan 23 00:08:21.710285 systemd-networkd[1817]: lxc_health: Gained carrier
Jan 23 00:08:23.080416 systemd-networkd[1817]: lxc_health: Gained IPv6LL
Jan 23 00:08:25.219542 ntpd[2206]: Listen normally on 13 lxc_health [fe80::ec17:d0ff:fe91:b961%14]:123
Jan 23 00:08:25.220782 ntpd[2206]: 23 Jan 00:08:25 ntpd[2206]: Listen normally on 13 lxc_health [fe80::ec17:d0ff:fe91:b961%14]:123
Jan 23 00:08:27.917910 sshd[5392]: Connection closed by 4.153.228.146 port 55546
Jan 23 00:08:27.919087 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:27.927653 systemd[1]: sshd@26-172.31.17.104:22-4.153.228.146:55546.service: Deactivated successfully.
Jan 23 00:08:27.933757 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 00:08:27.939599 systemd-logind[1973]: Session 27 logged out. Waiting for processes to exit.
Jan 23 00:08:27.942782 systemd-logind[1973]: Removed session 27.
Jan 23 00:08:29.843150 update_engine[1974]: I20260123 00:08:29.841225 1974 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 23 00:08:29.843150 update_engine[1974]: I20260123 00:08:29.841295 1974 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 23 00:08:29.843150 update_engine[1974]: I20260123 00:08:29.841684 1974 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 23 00:08:29.845985 update_engine[1974]: I20260123 00:08:29.845896 1974 omaha_request_params.cc:62] Current group set to stable
Jan 23 00:08:29.846176 update_engine[1974]: I20260123 00:08:29.846076 1974 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 23 00:08:29.846414 update_engine[1974]: I20260123 00:08:29.846097 1974 update_attempter.cc:643] Scheduling an action processor start.
Jan 23 00:08:29.846478 update_engine[1974]: I20260123 00:08:29.846417 1974 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 00:08:29.846534 update_engine[1974]: I20260123 00:08:29.846486 1974 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 23 00:08:29.847300 update_engine[1974]: I20260123 00:08:29.846596 1974 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 00:08:29.847300 update_engine[1974]: I20260123 00:08:29.846625 1974 omaha_request_action.cc:272] Request:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]:
Jan 23 00:08:29.847300 update_engine[1974]: I20260123 00:08:29.846645 1974 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 00:08:29.851484 update_engine[1974]: I20260123 00:08:29.851408 1974 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 00:08:29.852224 locksmithd[2033]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 23 00:08:29.854141 update_engine[1974]: I20260123 00:08:29.852730 1974 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 00:08:29.865350 update_engine[1974]: E20260123 00:08:29.865265 1974 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 00:08:29.865500 update_engine[1974]: I20260123 00:08:29.865410 1974 libcurl_http_fetcher.cc:283] No HTTP response, retry 1