Apr 17 23:33:19.243972 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 17 23:33:19.244039 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:33:19.244068 kernel: KASLR disabled due to lack of seed
Apr 17 23:33:19.244085 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:33:19.244102 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Apr 17 23:33:19.244150 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:33:19.244170 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 17 23:33:19.244187 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:33:19.244205 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:33:19.244221 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:33:19.244244 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:33:19.244261 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 17 23:33:19.244277 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 17 23:33:19.244294 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 17 23:33:19.244314 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:33:19.244337 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 17 23:33:19.244355 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 17 23:33:19.244372 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 17 23:33:19.244390 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 17 23:33:19.244407 kernel: printk: bootconsole [uart0] enabled
Apr 17 23:33:19.244424 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:33:19.244442 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:33:19.244460 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Apr 17 23:33:19.244478 kernel: Zone ranges:
Apr 17 23:33:19.244495 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:33:19.244512 kernel: DMA32 empty
Apr 17 23:33:19.244534 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 17 23:33:19.244552 kernel: Movable zone start for each node
Apr 17 23:33:19.244569 kernel: Early memory node ranges
Apr 17 23:33:19.244586 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 17 23:33:19.244605 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 17 23:33:19.244622 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 17 23:33:19.244641 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 17 23:33:19.244659 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 17 23:33:19.244676 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 17 23:33:19.244693 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 17 23:33:19.244711 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 17 23:33:19.244728 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 17 23:33:19.244750 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 17 23:33:19.244769 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:33:19.244793 kernel: psci: PSCIv1.0 detected in firmware.
Apr 17 23:33:19.244812 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:33:19.244831 kernel: psci: Trusted OS migration not required
Apr 17 23:33:19.244854 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:33:19.244873 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 17 23:33:19.244892 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:33:19.244910 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:33:19.244929 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:33:19.244948 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:33:19.244965 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:33:19.244983 kernel: CPU features: detected: Spectre-v2
Apr 17 23:33:19.245001 kernel: CPU features: detected: Spectre-v3a
Apr 17 23:33:19.245020 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:33:19.245038 kernel: CPU features: detected: ARM erratum 1742098
Apr 17 23:33:19.245060 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 17 23:33:19.245078 kernel: alternatives: applying boot alternatives
Apr 17 23:33:19.245099 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:33:19.245359 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:33:19.245382 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:33:19.245401 kernel: Fallback order for Node 0: 0
Apr 17 23:33:19.245419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Apr 17 23:33:19.245437 kernel: Policy zone: Normal
Apr 17 23:33:19.245456 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:33:19.245474 kernel: software IO TLB: area num 2.
Apr 17 23:33:19.245492 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Apr 17 23:33:19.245522 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Apr 17 23:33:19.245541 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:33:19.245559 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:33:19.245578 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:33:19.245597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:33:19.245615 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:33:19.245633 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:33:19.245652 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:33:19.245670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:33:19.245689 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:33:19.245706 kernel: GICv3: 96 SPIs implemented
Apr 17 23:33:19.245729 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:33:19.245748 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:33:19.245766 kernel: GICv3: GICv3 features: 16 PPIs
Apr 17 23:33:19.245785 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 17 23:33:19.245804 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 17 23:33:19.245822 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:33:19.245841 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:33:19.245859 kernel: GICv3: using LPI property table @0x00000004000d0000
Apr 17 23:33:19.245877 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 17 23:33:19.245896 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Apr 17 23:33:19.245914 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:33:19.245932 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 17 23:33:19.245956 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 17 23:33:19.245974 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 17 23:33:19.245993 kernel: Console: colour dummy device 80x25
Apr 17 23:33:19.246011 kernel: printk: console [tty1] enabled
Apr 17 23:33:19.246030 kernel: ACPI: Core revision 20230628
Apr 17 23:33:19.246049 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 17 23:33:19.246068 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:33:19.246086 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:33:19.246105 kernel: landlock: Up and running.
Apr 17 23:33:19.246898 kernel: SELinux: Initializing.
Apr 17 23:33:19.246920 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:33:19.246939 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:33:19.246958 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:33:19.246977 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:33:19.246996 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:33:19.247016 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:33:19.247036 kernel: Platform MSI: ITS@0x10080000 domain created
Apr 17 23:33:19.247054 kernel: PCI/MSI: ITS@0x10080000 domain created
Apr 17 23:33:19.247078 kernel: Remapping and enabling EFI services.
Apr 17 23:33:19.247097 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:33:19.247148 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:33:19.247170 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 17 23:33:19.247214 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Apr 17 23:33:19.247240 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 17 23:33:19.247260 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:33:19.247278 kernel: SMP: Total of 2 processors activated.
Apr 17 23:33:19.247297 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:33:19.247323 kernel: CPU features: detected: 32-bit EL1 Support
Apr 17 23:33:19.247342 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:33:19.247362 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:33:19.247393 kernel: alternatives: applying system-wide alternatives
Apr 17 23:33:19.247418 kernel: devtmpfs: initialized
Apr 17 23:33:19.247437 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:33:19.247457 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:33:19.247476 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:33:19.247495 kernel: SMBIOS 3.0.0 present.
Apr 17 23:33:19.247520 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 17 23:33:19.247539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:33:19.247559 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:33:19.247605 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:33:19.247628 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:33:19.247648 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:33:19.247668 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Apr 17 23:33:19.247687 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:33:19.247713 kernel: cpuidle: using governor menu
Apr 17 23:33:19.247732 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:33:19.247752 kernel: ASID allocator initialised with 65536 entries
Apr 17 23:33:19.247771 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:33:19.247790 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:33:19.247809 kernel: Modules: 17488 pages in range for non-PLT usage
Apr 17 23:33:19.247829 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:33:19.247848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:33:19.247867 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:33:19.247891 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:33:19.247912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:33:19.247931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:33:19.247951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:33:19.247972 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:33:19.248032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:33:19.248056 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:33:19.248075 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:33:19.248095 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:33:19.248145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:33:19.248167 kernel: ACPI: Interpreter enabled
Apr 17 23:33:19.248188 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:33:19.248207 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:33:19.248226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 17 23:33:19.248585 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:33:19.251656 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:33:19.251960 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:33:19.252339 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 17 23:33:19.252562 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 17 23:33:19.252589 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 17 23:33:19.252610 kernel: acpiphp: Slot [1] registered
Apr 17 23:33:19.252630 kernel: acpiphp: Slot [2] registered
Apr 17 23:33:19.252650 kernel: acpiphp: Slot [3] registered
Apr 17 23:33:19.252669 kernel: acpiphp: Slot [4] registered
Apr 17 23:33:19.252688 kernel: acpiphp: Slot [5] registered
Apr 17 23:33:19.252719 kernel: acpiphp: Slot [6] registered
Apr 17 23:33:19.252740 kernel: acpiphp: Slot [7] registered
Apr 17 23:33:19.252761 kernel: acpiphp: Slot [8] registered
Apr 17 23:33:19.252782 kernel: acpiphp: Slot [9] registered
Apr 17 23:33:19.252802 kernel: acpiphp: Slot [10] registered
Apr 17 23:33:19.252822 kernel: acpiphp: Slot [11] registered
Apr 17 23:33:19.252842 kernel: acpiphp: Slot [12] registered
Apr 17 23:33:19.252863 kernel: acpiphp: Slot [13] registered
Apr 17 23:33:19.252883 kernel: acpiphp: Slot [14] registered
Apr 17 23:33:19.252902 kernel: acpiphp: Slot [15] registered
Apr 17 23:33:19.252928 kernel: acpiphp: Slot [16] registered
Apr 17 23:33:19.252948 kernel: acpiphp: Slot [17] registered
Apr 17 23:33:19.252968 kernel: acpiphp: Slot [18] registered
Apr 17 23:33:19.252988 kernel: acpiphp: Slot [19] registered
Apr 17 23:33:19.253007 kernel: acpiphp: Slot [20] registered
Apr 17 23:33:19.253027 kernel: acpiphp: Slot [21] registered
Apr 17 23:33:19.253046 kernel: acpiphp: Slot [22] registered
Apr 17 23:33:19.253065 kernel: acpiphp: Slot [23] registered
Apr 17 23:33:19.253084 kernel: acpiphp: Slot [24] registered
Apr 17 23:33:19.253143 kernel: acpiphp: Slot [25] registered
Apr 17 23:33:19.253383 kernel: acpiphp: Slot [26] registered
Apr 17 23:33:19.253404 kernel: acpiphp: Slot [27] registered
Apr 17 23:33:19.253424 kernel: acpiphp: Slot [28] registered
Apr 17 23:33:19.253444 kernel: acpiphp: Slot [29] registered
Apr 17 23:33:19.253463 kernel: acpiphp: Slot [30] registered
Apr 17 23:33:19.253482 kernel: acpiphp: Slot [31] registered
Apr 17 23:33:19.253501 kernel: PCI host bridge to bus 0000:00
Apr 17 23:33:19.253784 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 17 23:33:19.254001 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:33:19.254335 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:33:19.254547 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 17 23:33:19.254807 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Apr 17 23:33:19.255051 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Apr 17 23:33:19.255348 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Apr 17 23:33:19.255606 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:33:19.255830 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Apr 17 23:33:19.256145 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:33:19.256428 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:33:19.256652 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Apr 17 23:33:19.256910 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Apr 17 23:33:19.257204 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Apr 17 23:33:19.259567 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 17 23:33:19.259800 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 17 23:33:19.259999 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:33:19.260358 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 17 23:33:19.260394 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:33:19.260415 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:33:19.260457 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:33:19.260480 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:33:19.260513 kernel: iommu: Default domain type: Translated
Apr 17 23:33:19.260534 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:33:19.260554 kernel: efivars: Registered efivars operations
Apr 17 23:33:19.260574 kernel: vgaarb: loaded
Apr 17 23:33:19.260594 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:33:19.260613 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:33:19.260634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:33:19.260654 kernel: pnp: PnP ACPI init
Apr 17 23:33:19.260928 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 17 23:33:19.260965 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:33:19.260985 kernel: NET: Registered PF_INET protocol family
Apr 17 23:33:19.261005 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:33:19.261025 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:33:19.261044 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:33:19.261064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:33:19.261083 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:33:19.261102 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:33:19.261480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:33:19.261502 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:33:19.261522 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:33:19.261542 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:33:19.261561 kernel: kvm [1]: HYP mode not available
Apr 17 23:33:19.261580 kernel: Initialise system trusted keyrings
Apr 17 23:33:19.261601 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:33:19.261621 kernel: Key type asymmetric registered
Apr 17 23:33:19.261642 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:33:19.261667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 23:33:19.261687 kernel: io scheduler mq-deadline registered
Apr 17 23:33:19.261707 kernel: io scheduler kyber registered
Apr 17 23:33:19.261726 kernel: io scheduler bfq registered
Apr 17 23:33:19.262010 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 17 23:33:19.262042 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 17 23:33:19.262062 kernel: ACPI: button: Power Button [PWRB]
Apr 17 23:33:19.262083 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 17 23:33:19.262103 kernel: ACPI: button: Sleep Button [SLPB]
Apr 17 23:33:19.262304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:33:19.262353 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 17 23:33:19.262615 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 17 23:33:19.262644 kernel: printk: console [ttyS0] disabled
Apr 17 23:33:19.262665 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 17 23:33:19.262684 kernel: printk: console [ttyS0] enabled
Apr 17 23:33:19.262704 kernel: printk: bootconsole [uart0] disabled
Apr 17 23:33:19.262723 kernel: thunder_xcv, ver 1.0
Apr 17 23:33:19.262742 kernel: thunder_bgx, ver 1.0
Apr 17 23:33:19.262770 kernel: nicpf, ver 1.0
Apr 17 23:33:19.262790 kernel: nicvf, ver 1.0
Apr 17 23:33:19.263020 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 17 23:33:19.263304 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:33:18 UTC (1776468798)
Apr 17 23:33:19.263336 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 17 23:33:19.263357 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Apr 17 23:33:19.263377 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 17 23:33:19.263396 kernel: watchdog: Hard watchdog permanently disabled
Apr 17 23:33:19.263426 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:33:19.263446 kernel: Segment Routing with IPv6
Apr 17 23:33:19.263466 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:33:19.263485 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:33:19.263505 kernel: Key type dns_resolver registered
Apr 17 23:33:19.263524 kernel: registered taskstats version 1
Apr 17 23:33:19.263543 kernel: Loading compiled-in X.509 certificates
Apr 17 23:33:19.263564 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535'
Apr 17 23:33:19.263583 kernel: Key type .fscrypt registered
Apr 17 23:33:19.263607 kernel: Key type fscrypt-provisioning registered
Apr 17 23:33:19.263626 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:33:19.263645 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:33:19.263664 kernel: ima: No architecture policies found
Apr 17 23:33:19.263684 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 17 23:33:19.263703 kernel: clk: Disabling unused clocks
Apr 17 23:33:19.263723 kernel: Freeing unused kernel memory: 39424K
Apr 17 23:33:19.263742 kernel: Run /init as init process
Apr 17 23:33:19.263762 kernel: with arguments:
Apr 17 23:33:19.263785 kernel: /init
Apr 17 23:33:19.263805 kernel: with environment:
Apr 17 23:33:19.263824 kernel: HOME=/
Apr 17 23:33:19.263843 kernel: TERM=linux
Apr 17 23:33:19.263868 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:33:19.263892 systemd[1]: Detected virtualization amazon.
Apr 17 23:33:19.263914 systemd[1]: Detected architecture arm64.
Apr 17 23:33:19.263935 systemd[1]: Running in initrd.
Apr 17 23:33:19.263961 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:33:19.263982 systemd[1]: Hostname set to .
Apr 17 23:33:19.264004 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:33:19.264048 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:33:19.264105 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:33:19.264153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:33:19.264176 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:33:19.264198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:33:19.264228 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:33:19.264250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:33:19.264275 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:33:19.264296 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:33:19.264318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:33:19.264340 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:33:19.264366 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:33:19.264387 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:33:19.264408 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:33:19.264430 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:33:19.264451 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:33:19.264473 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:33:19.264494 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:33:19.264516 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:33:19.264537 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:33:19.264563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:33:19.264585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:33:19.264606 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:33:19.264628 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:33:19.264649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:33:19.264671 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:33:19.264692 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:33:19.264713 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:33:19.264734 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:33:19.264761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:33:19.264839 systemd-journald[252]: Collecting audit messages is disabled.
Apr 17 23:33:19.264886 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:33:19.264909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:33:19.264974 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:33:19.264997 systemd-journald[252]: Journal started
Apr 17 23:33:19.265042 systemd-journald[252]: Runtime Journal (/run/log/journal/ec22705655e62a6546fc6247a42a33cd) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:33:19.228564 systemd-modules-load[253]: Inserted module 'overlay'
Apr 17 23:33:19.275631 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:33:19.275680 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:33:19.283180 kernel: Bridge firewalling registered
Apr 17 23:33:19.282252 systemd-modules-load[253]: Inserted module 'br_netfilter'
Apr 17 23:33:19.288521 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:33:19.305489 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:33:19.316210 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:33:19.326331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:19.336657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:33:19.354559 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:33:19.365453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:33:19.383389 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:33:19.386809 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:33:19.412357 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:33:19.417798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:33:19.433545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:33:19.456579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:33:19.472352 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:33:19.504587 dracut-cmdline[289]: dracut-dracut-053
Apr 17 23:33:19.513173 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:33:19.537628 systemd-resolved[282]: Positive Trust Anchors:
Apr 17 23:33:19.537667 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:33:19.537730 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:33:19.679153 kernel: SCSI subsystem initialized
Apr 17 23:33:19.688135 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:33:19.699144 kernel: iscsi: registered transport (tcp)
Apr 17 23:33:19.722143 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:33:19.722216 kernel: QLogic iSCSI HBA Driver
Apr 17 23:33:19.790157 kernel: random: crng init done
Apr 17 23:33:19.790677 systemd-resolved[282]: Defaulting to hostname 'linux'.
Apr 17 23:33:19.794690 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:33:19.797307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:33:19.824408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:33:19.835370 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:33:19.881452 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:33:19.881526 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:33:19.883537 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:33:19.950159 kernel: raid6: neonx8 gen() 6511 MB/s
Apr 17 23:33:19.967146 kernel: raid6: neonx4 gen() 6427 MB/s
Apr 17 23:33:19.984148 kernel: raid6: neonx2 gen() 5336 MB/s
Apr 17 23:33:20.002155 kernel: raid6: neonx1 gen() 3937 MB/s
Apr 17 23:33:20.019149 kernel: raid6: int64x8 gen() 3780 MB/s
Apr 17 23:33:20.037147 kernel: raid6: int64x4 gen() 3665 MB/s
Apr 17 23:33:20.054145 kernel: raid6: int64x2 gen() 3549 MB/s
Apr 17 23:33:20.072732 kernel: raid6: int64x1 gen() 2765 MB/s
Apr 17 23:33:20.072772 kernel: raid6: using algorithm neonx8 gen() 6511 MB/s
Apr 17 23:33:20.091527 kernel: raid6: .... xor() 4913 MB/s, rmw enabled
Apr 17 23:33:20.091568 kernel: raid6: using neon recovery algorithm
Apr 17 23:33:20.100999 kernel: xor: measuring software checksum speed
Apr 17 23:33:20.101051 kernel: 8regs : 11035 MB/sec
Apr 17 23:33:20.103579 kernel: 32regs : 11133 MB/sec
Apr 17 23:33:20.103624 kernel: arm64_neon : 9575 MB/sec
Apr 17 23:33:20.103650 kernel: xor: using function: 32regs (11133 MB/sec)
Apr 17 23:33:20.189157 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:33:20.208202 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:33:20.223418 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:33:20.258864 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Apr 17 23:33:20.266879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:33:20.293836 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:33:20.317652 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Apr 17 23:33:20.378179 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:33:20.390417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:33:20.512778 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:33:20.525399 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:33:20.576672 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:33:20.584379 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:33:20.590000 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:33:20.593287 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:33:20.613442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:33:20.660973 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:33:20.742572 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 17 23:33:20.742654 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 17 23:33:20.756051 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 17 23:33:20.756485 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 17 23:33:20.762582 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:33:20.765233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:33:20.773574 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:33:20.776247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:33:20.779403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:20.786150 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:8b:87:81:34:99
Apr 17 23:33:20.788047 (udev-worker)[533]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:33:20.792884 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:33:20.804177 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 17 23:33:20.807548 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 17 23:33:20.806602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:33:20.819281 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 17 23:33:20.837913 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:33:20.837993 kernel: GPT:9289727 != 33554431
Apr 17 23:33:20.838020 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:33:20.838904 kernel: GPT:9289727 != 33554431
Apr 17 23:33:20.840184 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:33:20.841284 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:33:20.856196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:20.868863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:33:20.920057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:33:20.943443 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Apr 17 23:33:20.993190 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (525)
Apr 17 23:33:21.020994 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 17 23:33:21.097327 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 17 23:33:21.113995 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 17 23:33:21.116973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 17 23:33:21.136598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:33:21.147402 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:33:21.164490 disk-uuid[660]: Primary Header is updated.
Apr 17 23:33:21.164490 disk-uuid[660]: Secondary Entries is updated.
Apr 17 23:33:21.164490 disk-uuid[660]: Secondary Header is updated.
Apr 17 23:33:21.177173 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:33:21.186162 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:33:21.196206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:33:22.202154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:33:22.204553 disk-uuid[661]: The operation has completed successfully.
Apr 17 23:33:22.396491 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:33:22.396700 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:33:22.455409 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:33:22.475947 sh[1003]: Success
Apr 17 23:33:22.503297 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 17 23:33:22.618692 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:33:22.640430 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:33:22.648740 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:33:22.685967 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350
Apr 17 23:33:22.686056 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:33:22.688091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:33:22.688165 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:33:22.689564 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:33:22.739169 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:33:22.742038 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:33:22.746854 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:33:22.756476 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:33:22.770450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:33:22.807181 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:33:22.807257 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:33:22.808710 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:33:22.828147 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:33:22.847668 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:33:22.852183 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:33:22.861573 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:33:22.879526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:33:22.994637 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:33:23.007519 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:33:23.095398 systemd-networkd[1199]: lo: Link UP
Apr 17 23:33:23.095415 systemd-networkd[1199]: lo: Gained carrier
Apr 17 23:33:23.105061 systemd-networkd[1199]: Enumeration completed
Apr 17 23:33:23.105313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:33:23.106786 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:23.106794 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:33:23.113523 systemd[1]: Reached target network.target - Network.
Apr 17 23:33:23.132897 systemd-networkd[1199]: eth0: Link UP
Apr 17 23:33:23.132906 systemd-networkd[1199]: eth0: Gained carrier
Apr 17 23:33:23.132926 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:23.160260 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.22.159/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:33:23.189482 ignition[1122]: Ignition 2.19.0
Apr 17 23:33:23.189517 ignition[1122]: Stage: fetch-offline
Apr 17 23:33:23.193887 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:23.193938 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:23.199091 ignition[1122]: Ignition finished successfully
Apr 17 23:33:23.203742 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:33:23.215502 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:33:23.250847 ignition[1209]: Ignition 2.19.0
Apr 17 23:33:23.251505 ignition[1209]: Stage: fetch
Apr 17 23:33:23.252325 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:23.252352 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:23.252507 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:23.284231 ignition[1209]: PUT result: OK
Apr 17 23:33:23.287816 ignition[1209]: parsed url from cmdline: ""
Apr 17 23:33:23.288047 ignition[1209]: no config URL provided
Apr 17 23:33:23.288079 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:33:23.288163 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:33:23.288211 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:23.295054 ignition[1209]: PUT result: OK
Apr 17 23:33:23.296296 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 17 23:33:23.303421 ignition[1209]: GET result: OK
Apr 17 23:33:23.303721 ignition[1209]: parsing config with SHA512: 89ab5653f82d198ea5c27415605d4bbcbab87a37013323dfdb169b7a0a71d712ea8510e86e4a5ea081853c2b10f83006be87226a0048fecb92f0201c9fa74a2a
Apr 17 23:33:23.317862 unknown[1209]: fetched base config from "system"
Apr 17 23:33:23.317908 unknown[1209]: fetched base config from "system"
Apr 17 23:33:23.317926 unknown[1209]: fetched user config from "aws"
Apr 17 23:33:23.321256 ignition[1209]: fetch: fetch complete
Apr 17 23:33:23.321270 ignition[1209]: fetch: fetch passed
Apr 17 23:33:23.321399 ignition[1209]: Ignition finished successfully
Apr 17 23:33:23.330042 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:33:23.343521 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:33:23.376407 ignition[1216]: Ignition 2.19.0
Apr 17 23:33:23.376436 ignition[1216]: Stage: kargs
Apr 17 23:33:23.377191 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:23.377222 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:23.377396 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:23.383632 ignition[1216]: PUT result: OK
Apr 17 23:33:23.392722 ignition[1216]: kargs: kargs passed
Apr 17 23:33:23.392858 ignition[1216]: Ignition finished successfully
Apr 17 23:33:23.396474 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:33:23.421466 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:33:23.450540 ignition[1223]: Ignition 2.19.0
Apr 17 23:33:23.451078 ignition[1223]: Stage: disks
Apr 17 23:33:23.451833 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:23.451859 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:23.452043 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:23.466615 ignition[1223]: PUT result: OK
Apr 17 23:33:23.476402 ignition[1223]: disks: disks passed
Apr 17 23:33:23.476604 ignition[1223]: Ignition finished successfully
Apr 17 23:33:23.480429 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:33:23.488785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:33:23.491928 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:33:23.497703 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:33:23.503464 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:33:23.506019 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:33:23.523431 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:33:23.583323 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:33:23.590076 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:33:23.605479 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:33:23.713165 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none.
Apr 17 23:33:23.713695 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:33:23.719199 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:33:23.737347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:33:23.750365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:33:23.758878 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:33:23.758988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:33:23.759042 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:33:23.789347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:33:23.800295 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250)
Apr 17 23:33:23.800337 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:33:23.800378 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:33:23.800405 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:33:23.813484 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:33:23.822131 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:33:23.826155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:33:23.919714 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:33:23.931433 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:33:23.942040 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:33:23.950323 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:33:24.145591 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:33:24.157335 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:33:24.161490 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:33:24.192340 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:33:24.195810 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:33:24.238437 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:33:24.251065 ignition[1364]: INFO : Ignition 2.19.0
Apr 17 23:33:24.253329 ignition[1364]: INFO : Stage: mount
Apr 17 23:33:24.255104 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:24.257646 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:24.257646 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:24.264715 ignition[1364]: INFO : PUT result: OK
Apr 17 23:33:24.270932 ignition[1364]: INFO : mount: mount passed
Apr 17 23:33:24.273326 ignition[1364]: INFO : Ignition finished successfully
Apr 17 23:33:24.276033 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:33:24.288338 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:33:24.729748 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:33:24.751160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375)
Apr 17 23:33:24.755303 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:33:24.755343 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:33:24.755370 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:33:24.762169 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:33:24.765064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:33:24.806386 ignition[1393]: INFO : Ignition 2.19.0
Apr 17 23:33:24.806386 ignition[1393]: INFO : Stage: files
Apr 17 23:33:24.810084 ignition[1393]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:24.810084 ignition[1393]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:24.810084 ignition[1393]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:24.818543 ignition[1393]: INFO : PUT result: OK
Apr 17 23:33:24.822375 ignition[1393]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:33:24.825362 ignition[1393]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:33:24.825362 ignition[1393]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:33:24.831754 ignition[1393]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:33:24.831754 ignition[1393]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:33:24.831754 ignition[1393]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:33:24.830653 unknown[1393]: wrote ssh authorized keys file for user: core
Apr 17 23:33:24.843513 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 17 23:33:24.843513 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 17 23:33:24.843513 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:33:24.843513 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 23:33:24.934285 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:33:25.081255 systemd-networkd[1199]: eth0: Gained IPv6LL
Apr 17 23:33:25.086646 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:33:25.091140 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:33:25.091140 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 17 23:33:25.186072 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:33:25.307171 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:33:25.346697 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 17 23:33:25.619689 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 17 23:33:26.003707 ignition[1393]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:33:26.003707 ignition[1393]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:33:26.011606 ignition[1393]: INFO : files: files passed
Apr 17 23:33:26.011606 ignition[1393]: INFO : Ignition finished successfully
Apr 17 23:33:26.027754 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:33:26.060206 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:33:26.075564 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:33:26.079080 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:33:26.079355 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:33:26.119046 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:26.119046 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:26.126786 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:26.133503 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:33:26.139755 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:33:26.150620 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:33:26.205392 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:33:26.205617 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:33:26.215040 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:33:26.217901 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:33:26.225360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:33:26.233414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:33:26.277231 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:33:26.293578 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:33:26.318036 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:33:26.318477 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:33:26.320491 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:33:26.321200 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:33:26.321574 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:33:26.322829 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:33:26.323438 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:33:26.324938 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:33:26.326625 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:33:26.328084 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:33:26.329219 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:33:26.330369 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:33:26.331479 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:33:26.332884 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:33:26.335479 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:33:26.336207 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:33:26.336537 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:33:26.337907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:33:26.339772 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:33:26.340450 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:33:26.367400 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:33:26.368072 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:33:26.369178 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:33:26.407542 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:33:26.407972 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:33:26.439397 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:33:26.439948 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:33:26.457389 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:33:26.459967 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:33:26.460435 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:33:26.472608 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:33:26.474786 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:33:26.475194 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:33:26.486549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:33:26.488913 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:33:26.505713 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:33:26.512170 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:33:26.529244 ignition[1444]: INFO : Ignition 2.19.0
Apr 17 23:33:26.529244 ignition[1444]: INFO : Stage: umount
Apr 17 23:33:26.534730 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:26.534730 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:33:26.534730 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:33:26.544570 ignition[1444]: INFO : PUT result: OK
Apr 17 23:33:26.550137 ignition[1444]: INFO : umount: umount passed
Apr 17 23:33:26.553194 ignition[1444]: INFO : Ignition finished successfully
Apr 17 23:33:26.558084 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:33:26.563494 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:33:26.568304 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:33:26.575031 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:33:26.575274 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:33:26.578656 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:33:26.578826 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:33:26.580203 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:33:26.580300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:33:26.580852 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:33:26.580934 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:33:26.581597 systemd[1]: Stopped target network.target - Network.
Apr 17 23:33:26.581911 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:33:26.581994 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:33:26.582705 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:33:26.583030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:33:26.599319 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:33:26.602481 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:33:26.604608 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:33:26.606969 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:33:26.607057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:33:26.609516 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:33:26.609610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:33:26.612271 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:33:26.612369 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:33:26.614948 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:33:26.615042 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:33:26.617797 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:33:26.617915 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:33:26.623182 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:33:26.628887 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:33:26.635181 systemd-networkd[1199]: eth0: DHCPv6 lease lost
Apr 17 23:33:26.644678 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:33:26.644997 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:33:26.648907 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:33:26.649016 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:33:26.676363 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:33:26.679382 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:33:26.679510 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:33:26.711167 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:33:26.714804 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:33:26.715074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:33:26.729919 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:33:26.739088 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:33:26.744903 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:33:26.745032 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:33:26.754865 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:33:26.754979 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:33:26.782563 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:33:26.785899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:33:26.792970 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:33:26.795206 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:33:26.801041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:33:26.801188 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:33:26.803701 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:33:26.803773 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:33:26.806181 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:33:26.806271 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:33:26.809564 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:33:26.809655 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:33:26.830236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:33:26.830353 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:33:26.847509 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:33:26.851261 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:33:26.851391 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:33:26.857028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:33:26.857164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:26.866252 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:33:26.866444 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:33:26.884086 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:33:26.895414 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:33:26.915748 systemd[1]: Switching root.
Apr 17 23:33:26.959160 systemd-journald[252]: Journal stopped
Apr 17 23:33:29.127198 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:33:29.127340 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:33:29.127388 kernel: SELinux: policy capability open_perms=1
Apr 17 23:33:29.127421 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:33:29.127455 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:33:29.127489 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:33:29.127521 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:33:29.127551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:33:29.127594 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:33:29.127634 kernel: audit: type=1403 audit(1776468807.393:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:33:29.127671 systemd[1]: Successfully loaded SELinux policy in 52.907ms.
Apr 17 23:33:29.127726 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.729ms.
Apr 17 23:33:29.130249 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:33:29.130299 systemd[1]: Detected virtualization amazon.
Apr 17 23:33:29.130334 systemd[1]: Detected architecture arm64.
Apr 17 23:33:29.130366 systemd[1]: Detected first boot.
Apr 17 23:33:29.130399 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:33:29.130433 zram_generator::config[1503]: No configuration found.
Apr 17 23:33:29.130477 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:33:29.130512 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:33:29.130546 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:33:29.130581 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:33:29.130615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:33:29.130651 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:33:29.130682 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:33:29.130716 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:33:29.130752 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:33:29.130786 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:33:29.130819 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:33:29.130849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:33:29.130880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:33:29.130912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:33:29.130945 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:33:29.130979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:33:29.131015 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:33:29.131049 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:33:29.131080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:33:29.131151 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:33:29.131192 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:33:29.131226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:33:29.131257 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:33:29.131290 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:33:29.131327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:33:29.131361 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:33:29.131391 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:33:29.131425 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:33:29.131461 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:33:29.131493 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:33:29.131526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:33:29.131558 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:33:29.131590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:33:29.131621 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:33:29.131668 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:33:29.131699 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:33:29.131733 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:33:29.131764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:33:29.131794 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:33:29.131826 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:33:29.131859 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:33:29.131889 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:33:29.131924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:33:29.131955 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:33:29.132005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:33:29.132041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:33:29.132074 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:33:29.134572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:33:29.134955 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:33:29.135000 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:33:29.135033 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:33:29.135073 kernel: loop: module loaded
Apr 17 23:33:29.135158 kernel: fuse: init (API version 7.39)
Apr 17 23:33:29.135219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:33:29.135252 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:33:29.135283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:33:29.135313 kernel: ACPI: bus type drm_connector registered
Apr 17 23:33:29.135347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:33:29.135385 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:33:29.135421 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:33:29.135461 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:33:29.135493 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:33:29.135525 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:33:29.135556 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:33:29.135587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:33:29.135617 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:33:29.135648 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:33:29.135682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:33:29.135717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:33:29.135754 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:33:29.135788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:33:29.135819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:33:29.135853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:33:29.135938 systemd-journald[1606]: Collecting audit messages is disabled.
Apr 17 23:33:29.136020 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:33:29.136056 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:33:29.136088 systemd-journald[1606]: Journal started
Apr 17 23:33:29.136215 systemd-journald[1606]: Runtime Journal (/run/log/journal/ec22705655e62a6546fc6247a42a33cd) is 8.0M, max 75.3M, 67.3M free.
Apr 17 23:33:29.143049 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:33:29.143228 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:33:29.146449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:33:29.153247 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:33:29.159222 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:33:29.162996 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:33:29.171666 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:33:29.196660 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:33:29.207478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:33:29.221508 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:33:29.225151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:33:29.239430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:33:29.264725 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:33:29.272341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:33:29.274666 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:33:29.277325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:33:29.289084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:33:29.307647 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:33:29.320286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:33:29.325356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:33:29.334844 systemd-journald[1606]: Time spent on flushing to /var/log/journal/ec22705655e62a6546fc6247a42a33cd is 103.537ms for 891 entries.
Apr 17 23:33:29.334844 systemd-journald[1606]: System Journal (/var/log/journal/ec22705655e62a6546fc6247a42a33cd) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:33:29.464324 systemd-journald[1606]: Received client request to flush runtime journal.
Apr 17 23:33:29.361973 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:33:29.371404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:33:29.404988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:33:29.459598 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Apr 17 23:33:29.459624 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Apr 17 23:33:29.465296 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:33:29.492682 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:33:29.496660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:33:29.514586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:33:29.530722 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:33:29.541475 udevadm[1669]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:33:29.606631 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:33:29.624693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:33:29.683932 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Apr 17 23:33:29.684734 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Apr 17 23:33:29.696004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:33:30.337090 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:33:30.347477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:33:30.413753 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Apr 17 23:33:30.457089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:33:30.469631 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:33:30.508550 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:33:30.615313 (udev-worker)[1688]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:33:30.641202 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 17 23:33:30.701553 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:33:30.898011 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1705)
Apr 17 23:33:30.906318 systemd-networkd[1686]: lo: Link UP
Apr 17 23:33:30.906345 systemd-networkd[1686]: lo: Gained carrier
Apr 17 23:33:30.911237 systemd-networkd[1686]: Enumeration completed
Apr 17 23:33:30.911486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:33:30.913687 systemd-networkd[1686]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:30.913695 systemd-networkd[1686]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:33:30.925201 systemd-networkd[1686]: eth0: Link UP
Apr 17 23:33:30.928838 systemd-networkd[1686]: eth0: Gained carrier
Apr 17 23:33:30.928900 systemd-networkd[1686]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:30.936881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:33:30.946284 systemd-networkd[1686]: eth0: DHCPv4 address 172.31.22.159/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:33:31.203743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:33:31.221637 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:33:31.238630 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:33:31.254153 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:33:31.289317 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:33:31.344147 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:33:31.353790 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:33:31.365590 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:33:31.385256 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:33:31.388044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:31.431657 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:33:31.437050 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:33:31.440350 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:33:31.440691 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:33:31.443604 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:33:31.448305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:33:31.458601 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:33:31.471685 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:33:31.474620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:33:31.478982 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:33:31.498698 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:33:31.515839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:33:31.531765 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:33:31.571713 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:33:31.574229 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:33:31.592368 kernel: loop0: detected capacity change from 0 to 52536
Apr 17 23:33:31.595682 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:33:31.697906 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:33:31.738165 kernel: loop1: detected capacity change from 0 to 114432
Apr 17 23:33:31.791238 kernel: loop2: detected capacity change from 0 to 114328
Apr 17 23:33:31.852154 kernel: loop3: detected capacity change from 0 to 209336
Apr 17 23:33:32.165179 kernel: loop4: detected capacity change from 0 to 52536
Apr 17 23:33:32.199549 kernel: loop5: detected capacity change from 0 to 114432
Apr 17 23:33:32.223482 kernel: loop6: detected capacity change from 0 to 114328
Apr 17 23:33:32.260246 kernel: loop7: detected capacity change from 0 to 209336
Apr 17 23:33:32.294262 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:33:32.301348 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:33:32.305741 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:33:32.308635 (sd-merge)[1839]: Merged extensions into '/usr'.
Apr 17 23:33:32.321545 systemd[1]: Reloading requested from client PID 1826 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:33:32.321591 systemd[1]: Reloading...
Apr 17 23:33:32.443177 zram_generator::config[1866]: No configuration found.
Apr 17 23:33:32.761335 systemd-networkd[1686]: eth0: Gained IPv6LL
Apr 17 23:33:32.764449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:33:32.940056 systemd[1]: Reloading finished in 617 ms.
Apr 17 23:33:32.968569 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:33:32.972632 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:33:32.989600 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:33:32.999502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:33:33.020294 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:33:33.020344 systemd[1]: Reloading...
Apr 17 23:33:33.058274 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:33:33.059051 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:33:33.062301 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:33:33.063325 systemd-tmpfiles[1928]: ACLs are not supported, ignoring.
Apr 17 23:33:33.063508 systemd-tmpfiles[1928]: ACLs are not supported, ignoring.
Apr 17 23:33:33.071813 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:33:33.072092 systemd-tmpfiles[1928]: Skipping /boot
Apr 17 23:33:33.100177 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:33:33.100203 systemd-tmpfiles[1928]: Skipping /boot
Apr 17 23:33:33.233210 zram_generator::config[1962]: No configuration found.
Apr 17 23:33:33.490149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:33:33.661355 systemd[1]: Reloading finished in 640 ms.
Apr 17 23:33:33.700559 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:33:33.718521 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:33:33.736867 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:33:33.743468 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:33:33.762412 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:33:33.782680 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:33:33.818571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:33:33.823730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:33:33.840686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:33:33.867462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:33:33.872387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:33:33.894999 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:33:33.927408 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:33:33.927932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:33:33.933808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:33:33.935680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:33:33.943898 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:33:33.948376 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:33:33.974311 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:33:33.991598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:33:34.006716 augenrules[2050]: No rules
Apr 17 23:33:34.007101 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:33:34.011553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:33:34.011672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:33:34.011810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:33:34.011886 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:33:34.034410 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:33:34.040841 systemd[1]: Finished ensure-sysext.service. Apr 17 23:33:34.047665 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:33:34.054460 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:33:34.056853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:33:34.110917 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:33:34.122176 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:33:34.130598 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:33:34.158584 systemd-resolved[2020]: Positive Trust Anchors: Apr 17 23:33:34.158624 systemd-resolved[2020]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:33:34.158690 systemd-resolved[2020]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:33:34.175259 systemd-resolved[2020]: Defaulting to hostname 'linux'. Apr 17 23:33:34.179604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:33:34.182474 systemd[1]: Reached target network.target - Network. Apr 17 23:33:34.184689 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:33:34.187409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:33:34.190446 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:33:34.193415 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:33:34.196664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:33:34.200300 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:33:34.203335 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:33:34.206553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:33:34.209868 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Apr 17 23:33:34.209964 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:33:34.212470 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:33:34.216077 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:33:34.223083 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:33:34.228870 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:33:34.236373 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:33:34.239078 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:33:34.243268 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:33:34.246016 systemd[1]: System is tainted: cgroupsv1 Apr 17 23:33:34.247289 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:33:34.247363 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:33:34.258377 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:33:34.266515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:33:34.284736 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:33:34.292846 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:33:34.313434 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:33:34.316078 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:33:34.333708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:34.348395 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Apr 17 23:33:34.367682 jq[2075]: false Apr 17 23:33:34.386414 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:33:34.397510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:33:34.420361 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:33:34.446344 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:33:34.462964 dbus-daemon[2074]: [system] SELinux support is enabled Apr 17 23:33:34.474156 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:33:34.497884 dbus-daemon[2074]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1686 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:33:34.504505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:33:34.536168 ntpd[2083]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:13 UTC 2026 (1): Starting Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: ---------------------------------------------------- Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: corporation. 
Support and training for ntp-4 are Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: available at https://www.nwtime.org/support Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: ---------------------------------------------------- Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: proto: precision = 0.096 usec (-23) Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: basedate set to 2026-04-05 Apr 17 23:33:34.558666 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: gps base set to 2026-04-05 (week 2413) Apr 17 23:33:34.559564 extend-filesystems[2076]: Found loop4 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found loop5 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found loop6 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found loop7 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p1 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p2 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p3 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found usr Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p4 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p6 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p7 Apr 17 23:33:34.559564 extend-filesystems[2076]: Found nvme0n1p9 Apr 17 23:33:34.559564 extend-filesystems[2076]: Checking size of /dev/nvme0n1p9 Apr 17 23:33:34.536240 ntpd[2083]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:33:34.545266 systemd[1]: Starting systemd-logind.service - User Login Management... 
Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen normally on 3 eth0 172.31.22.159:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen normally on 4 lo [::1]:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listen normally on 5 eth0 [fe80::48b:87ff:fe81:3499%2]:123 Apr 17 23:33:34.626825 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: Listening on routing socket on fd #22 for interface updates Apr 17 23:33:34.536262 ntpd[2083]: ---------------------------------------------------- Apr 17 23:33:34.552061 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:33:34.536284 ntpd[2083]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:33:34.573518 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:33:34.536304 ntpd[2083]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:33:34.590563 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:33:34.536325 ntpd[2083]: corporation. 
Support and training for ntp-4 are Apr 17 23:33:34.536344 ntpd[2083]: available at https://www.nwtime.org/support Apr 17 23:33:34.536364 ntpd[2083]: ---------------------------------------------------- Apr 17 23:33:34.555354 ntpd[2083]: proto: precision = 0.096 usec (-23) Apr 17 23:33:34.555875 ntpd[2083]: basedate set to 2026-04-05 Apr 17 23:33:34.555906 ntpd[2083]: gps base set to 2026-04-05 (week 2413) Apr 17 23:33:34.565855 ntpd[2083]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:33:34.565956 ntpd[2083]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:33:34.576498 ntpd[2083]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:33:34.584266 ntpd[2083]: Listen normally on 3 eth0 172.31.22.159:123 Apr 17 23:33:34.586104 ntpd[2083]: Listen normally on 4 lo [::1]:123 Apr 17 23:33:34.586283 ntpd[2083]: Listen normally on 5 eth0 [fe80::48b:87ff:fe81:3499%2]:123 Apr 17 23:33:34.586366 ntpd[2083]: Listening on routing socket on fd #22 for interface updates Apr 17 23:33:34.630570 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:33:34.658876 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:33:34.659257 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:33:34.659257 ntpd[2083]: 17 Apr 23:33:34 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:33:34.658950 ntpd[2083]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:33:34.692203 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:33:34.692793 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:33:34.709968 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:33:34.711657 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:33:34.722581 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:33:34.749768 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:33:34.750476 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:33:34.758182 extend-filesystems[2076]: Resized partition /dev/nvme0n1p9 Apr 17 23:33:34.776446 jq[2106]: true Apr 17 23:33:34.787442 extend-filesystems[2125]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:33:34.799415 update_engine[2105]: I20260417 23:33:34.782785 2105 main.cc:92] Flatcar Update Engine starting Apr 17 23:33:34.812632 update_engine[2105]: I20260417 23:33:34.811439 2105 update_check_scheduler.cc:74] Next update check in 5m59s Apr 17 23:33:34.814152 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:33:34.856416 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:33:34.858014 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:33:34.870298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:33:34.880570 jq[2135]: true Apr 17 23:33:34.870386 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:33:34.882008 (ntainerd)[2136]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:33:34.882706 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:33:34.885439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:33:34.885482 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 17 23:33:34.891494 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:33:34.895448 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:33:35.003626 coreos-metadata[2072]: Apr 17 23:33:34.999 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:33:35.008967 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:33:35.076728 tar[2121]: linux-arm64/LICENSE Apr 17 23:33:35.076728 tar[2121]: linux-arm64/helm Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.016 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.020 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.025 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.025 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.026 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.040 INFO Fetch failed with 404: resource not found Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 
23:33:35.040 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.040 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.054 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.055 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.056 INFO Fetch successful Apr 17 23:33:35.077298 coreos-metadata[2072]: Apr 17 23:33:35.056 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:33:35.152590 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:33:35.111046 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:33:35.153253 coreos-metadata[2072]: Apr 17 23:33:35.084 INFO Fetch successful Apr 17 23:33:35.121760 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:33:35.167882 extend-filesystems[2125]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:33:35.167882 extend-filesystems[2125]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:33:35.167882 extend-filesystems[2125]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:33:35.205992 extend-filesystems[2076]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:33:35.174829 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:33:35.175487 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:33:35.274408 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Apr 17 23:33:35.280662 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:33:35.359247 bash[2181]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:33:35.366100 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:33:35.382594 systemd[1]: Starting sshkeys.service... Apr 17 23:33:35.509394 systemd-logind[2097]: Watching system buttons on /dev/input/event0 (Power Button) Apr 17 23:33:35.509459 systemd-logind[2097]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 17 23:33:35.510389 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:33:35.524853 systemd-logind[2097]: New seat seat0. Apr 17 23:33:35.525828 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:33:35.543673 containerd[2136]: time="2026-04-17T23:33:35.540778836Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:33:35.545389 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: Initializing new seelog logger Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 processing appconfig overrides Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 17 23:33:35.583130 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 processing appconfig overrides Apr 17 23:33:35.587167 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.587167 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.587167 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 processing appconfig overrides Apr 17 23:33:35.599052 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO Proxy environment variables: Apr 17 23:33:35.600311 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.600311 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:33:35.600475 amazon-ssm-agent[2165]: 2026/04/17 23:33:35 processing appconfig overrides Apr 17 23:33:35.712286 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO https_proxy: Apr 17 23:33:35.841787 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO http_proxy: Apr 17 23:33:35.843983 containerd[2136]: time="2026-04-17T23:33:35.843888074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:33:35.857765 coreos-metadata[2196]: Apr 17 23:33:35.857 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:33:35.864573 coreos-metadata[2196]: Apr 17 23:33:35.864 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:33:35.868453 coreos-metadata[2196]: Apr 17 23:33:35.868 INFO Fetch successful Apr 17 23:33:35.868453 coreos-metadata[2196]: Apr 17 23:33:35.868 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:33:35.873065 coreos-metadata[2196]: Apr 17 23:33:35.872 INFO Fetch successful Apr 17 23:33:35.877151 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (2198) Apr 17 23:33:35.878670 containerd[2136]: time="2026-04-17T23:33:35.878425706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:33:35.878670 containerd[2136]: time="2026-04-17T23:33:35.878516570Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:33:35.878670 containerd[2136]: time="2026-04-17T23:33:35.878575070Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:33:35.879191 containerd[2136]: time="2026-04-17T23:33:35.878967242Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 17 23:33:35.879191 containerd[2136]: time="2026-04-17T23:33:35.879042794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879328 containerd[2136]: time="2026-04-17T23:33:35.879251882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879328 containerd[2136]: time="2026-04-17T23:33:35.879288866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879893 containerd[2136]: time="2026-04-17T23:33:35.879748250Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879893 containerd[2136]: time="2026-04-17T23:33:35.879808934Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879893 containerd[2136]: time="2026-04-17T23:33:35.879844886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:33:35.879893 containerd[2136]: time="2026-04-17T23:33:35.879870638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:33:35.881524 unknown[2196]: wrote ssh authorized keys file for user: core Apr 17 23:33:35.893149 containerd[2136]: time="2026-04-17T23:33:35.891365606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:33:35.893149 containerd[2136]: time="2026-04-17T23:33:35.892368074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:33:35.902605 containerd[2136]: time="2026-04-17T23:33:35.902462858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:33:35.902605 containerd[2136]: time="2026-04-17T23:33:35.902557058Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:33:35.904323 containerd[2136]: time="2026-04-17T23:33:35.902940746Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:33:35.904323 containerd[2136]: time="2026-04-17T23:33:35.903179114Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:33:35.934291 containerd[2136]: time="2026-04-17T23:33:35.934208822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:33:35.934459 containerd[2136]: time="2026-04-17T23:33:35.934323986Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:33:35.934459 containerd[2136]: time="2026-04-17T23:33:35.934364114Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:33:35.934459 containerd[2136]: time="2026-04-17T23:33:35.934400162Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:33:35.934459 containerd[2136]: time="2026-04-17T23:33:35.934435430Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:33:35.934851 containerd[2136]: time="2026-04-17T23:33:35.934769606Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 17 23:33:35.935572 containerd[2136]: time="2026-04-17T23:33:35.935488634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:33:35.935869 containerd[2136]: time="2026-04-17T23:33:35.935797274Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:33:35.935984 containerd[2136]: time="2026-04-17T23:33:35.935888618Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:33:35.935984 containerd[2136]: time="2026-04-17T23:33:35.935925998Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:33:35.936083 containerd[2136]: time="2026-04-17T23:33:35.935979890Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.936083 containerd[2136]: time="2026-04-17T23:33:35.936021842Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.936083 containerd[2136]: time="2026-04-17T23:33:35.936070490Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.947289 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO no_proxy: Apr 17 23:33:35.948467 containerd[2136]: time="2026-04-17T23:33:35.936105098Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.948836 containerd[2136]: time="2026-04-17T23:33:35.948697982Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.948836 containerd[2136]: time="2026-04-17T23:33:35.948779654Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 17 23:33:35.949431 containerd[2136]: time="2026-04-17T23:33:35.949356110Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.952278 containerd[2136]: time="2026-04-17T23:33:35.952195466Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954189386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954295718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954343394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954391910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954436586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954480818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954516218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954571730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954616022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954666938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954707666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954752054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954792914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954839114Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:33:35.955203 containerd[2136]: time="2026-04-17T23:33:35.954901346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955937 containerd[2136]: time="2026-04-17T23:33:35.954942062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.955937 containerd[2136]: time="2026-04-17T23:33:35.954978206Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:33:35.958631 containerd[2136]: time="2026-04-17T23:33:35.956436014Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.959803118Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960045422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960131726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960167270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960220154Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960259358Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:33:35.961153 containerd[2136]: time="2026-04-17T23:33:35.960290642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:33:35.961601 containerd[2136]: time="2026-04-17T23:33:35.960902726Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:33:35.961601 containerd[2136]: time="2026-04-17T23:33:35.961043438Z" level=info msg="Connect containerd service" Apr 17 23:33:35.966241 containerd[2136]: time="2026-04-17T23:33:35.964200818Z" level=info msg="using legacy CRI server" Apr 17 23:33:35.966241 containerd[2136]: time="2026-04-17T23:33:35.964288682Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:33:35.966241 containerd[2136]: time="2026-04-17T23:33:35.965385194Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:33:35.968996 containerd[2136]: time="2026-04-17T23:33:35.968921342Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:33:35.971825 containerd[2136]: time="2026-04-17T23:33:35.971742122Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:33:35.973302 containerd[2136]: time="2026-04-17T23:33:35.973082558Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 17 23:33:35.976000 containerd[2136]: time="2026-04-17T23:33:35.975741386Z" level=info msg="Start subscribing containerd event" Apr 17 23:33:35.980320 containerd[2136]: time="2026-04-17T23:33:35.975939374Z" level=info msg="Start recovering state" Apr 17 23:33:35.983055 containerd[2136]: time="2026-04-17T23:33:35.982688042Z" level=info msg="Start event monitor" Apr 17 23:33:35.983055 containerd[2136]: time="2026-04-17T23:33:35.982872902Z" level=info msg="Start snapshots syncer" Apr 17 23:33:35.983389 containerd[2136]: time="2026-04-17T23:33:35.983327966Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:33:35.983995 containerd[2136]: time="2026-04-17T23:33:35.983917622Z" level=info msg="Start streaming server" Apr 17 23:33:35.987010 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:33:35.991999 containerd[2136]: time="2026-04-17T23:33:35.986949302Z" level=info msg="containerd successfully booted in 0.468453s" Apr 17 23:33:36.002818 update-ssh-keys[2222]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:33:36.045490 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:33:36.053235 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:33:36.066521 systemd[1]: Finished sshkeys.service. Apr 17 23:33:36.153155 amazon-ssm-agent[2165]: 2026-04-17 23:33:35 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:33:36.165059 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:33:36.167093 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Apr 17 23:33:36.174704 dbus-daemon[2074]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2142 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:33:36.211919 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:33:36.243179 locksmithd[2143]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:33:36.247783 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO Agent will take identity from EC2 Apr 17 23:33:36.280513 polkitd[2267]: Started polkitd version 121 Apr 17 23:33:36.353247 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:33:36.368883 polkitd[2267]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:33:36.369040 polkitd[2267]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:33:36.376416 polkitd[2267]: Finished loading, compiling and executing 2 rules Apr 17 23:33:36.387103 dbus-daemon[2074]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:33:36.387498 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:33:36.390735 polkitd[2267]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:33:36.449178 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:33:36.463910 systemd-resolved[2020]: System hostname changed to 'ip-172-31-22-159'. 
Apr 17 23:33:36.467209 systemd-hostnamed[2142]: Hostname set to (transient) Apr 17 23:33:36.546917 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:33:36.647203 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:33:36.680680 sshd_keygen[2114]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:33:36.747234 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 17 23:33:36.810045 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:33:36.828632 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:33:36.837102 systemd[1]: Started sshd@0-172.31.22.159:22-4.175.71.9:56596.service - OpenSSH per-connection server daemon (4.175.71.9:56596). Apr 17 23:33:36.849263 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:33:36.876075 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:33:36.876653 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:33:36.894035 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:33:36.933509 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:33:36.944719 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:33:36.951164 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:33:36.957692 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:33:36.961688 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 23:33:37.049727 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [Registrar] Starting registrar module Apr 17 23:33:37.151196 amazon-ssm-agent[2165]: 2026-04-17 23:33:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:33:37.295156 tar[2121]: linux-arm64/README.md Apr 17 23:33:37.328879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:33:37.714812 amazon-ssm-agent[2165]: 2026-04-17 23:33:37 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:33:37.745168 amazon-ssm-agent[2165]: 2026-04-17 23:33:37 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:33:37.745168 amazon-ssm-agent[2165]: 2026-04-17 23:33:37 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:33:37.745168 amazon-ssm-agent[2165]: 2026-04-17 23:33:37 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:33:37.815210 amazon-ssm-agent[2165]: 2026-04-17 23:33:37 INFO [CredentialRefresher] Next credential rotation will be in 30.166658896766666 minutes Apr 17 23:33:37.939411 sshd[2340]: Accepted publickey for core from 4.175.71.9 port 56596 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:37.942417 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:37.964209 systemd-logind[2097]: New session 1 of user core. Apr 17 23:33:37.966757 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:33:37.980759 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:33:38.011496 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:33:38.028673 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 17 23:33:38.056884 (systemd)[2361]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:33:38.290413 systemd[2361]: Queued start job for default target default.target. Apr 17 23:33:38.291608 systemd[2361]: Created slice app.slice - User Application Slice. Apr 17 23:33:38.291666 systemd[2361]: Reached target paths.target - Paths. Apr 17 23:33:38.291698 systemd[2361]: Reached target timers.target - Timers. Apr 17 23:33:38.297275 systemd[2361]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:33:38.323453 systemd[2361]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:33:38.323584 systemd[2361]: Reached target sockets.target - Sockets. Apr 17 23:33:38.323616 systemd[2361]: Reached target basic.target - Basic System. Apr 17 23:33:38.323697 systemd[2361]: Reached target default.target - Main User Target. Apr 17 23:33:38.323754 systemd[2361]: Startup finished in 255ms. Apr 17 23:33:38.324844 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:33:38.339771 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:33:38.798341 amazon-ssm-agent[2165]: 2026-04-17 23:33:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:33:38.897694 amazon-ssm-agent[2165]: 2026-04-17 23:33:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2374) started Apr 17 23:33:38.999275 amazon-ssm-agent[2165]: 2026-04-17 23:33:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:33:39.067737 systemd[1]: Started sshd@1-172.31.22.159:22-4.175.71.9:56042.service - OpenSSH per-connection server daemon (4.175.71.9:56042). Apr 17 23:33:39.164465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:33:39.169243 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:33:39.173329 systemd[1]: Startup finished in 9.843s (kernel) + 11.832s (userspace) = 21.676s. Apr 17 23:33:39.181812 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:33:40.109466 sshd[2384]: Accepted publickey for core from 4.175.71.9 port 56042 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:40.112347 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:40.124416 systemd-logind[2097]: New session 2 of user core. Apr 17 23:33:40.131653 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:33:40.629543 kubelet[2394]: E0417 23:33:40.629483 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:33:40.635284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:33:40.635708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:33:40.823735 sshd[2384]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:40.829332 systemd-logind[2097]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:33:40.830766 systemd[1]: sshd@1-172.31.22.159:22-4.175.71.9:56042.service: Deactivated successfully. Apr 17 23:33:40.837535 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:33:40.839319 systemd-logind[2097]: Removed session 2. Apr 17 23:33:41.005580 systemd[1]: Started sshd@2-172.31.22.159:22-4.175.71.9:56056.service - OpenSSH per-connection server daemon (4.175.71.9:56056). 
Apr 17 23:33:41.250143 systemd-resolved[2020]: Clock change detected. Flushing caches. Apr 17 23:33:41.749380 sshd[2412]: Accepted publickey for core from 4.175.71.9 port 56056 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:41.751071 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:41.759331 systemd-logind[2097]: New session 3 of user core. Apr 17 23:33:41.769439 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:33:42.452220 sshd[2412]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:42.460087 systemd[1]: sshd@2-172.31.22.159:22-4.175.71.9:56056.service: Deactivated successfully. Apr 17 23:33:42.465392 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:33:42.466192 systemd-logind[2097]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:33:42.468353 systemd-logind[2097]: Removed session 3. Apr 17 23:33:42.627397 systemd[1]: Started sshd@3-172.31.22.159:22-4.175.71.9:56064.service - OpenSSH per-connection server daemon (4.175.71.9:56064). Apr 17 23:33:43.673407 sshd[2420]: Accepted publickey for core from 4.175.71.9 port 56064 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:43.675190 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:43.682945 systemd-logind[2097]: New session 4 of user core. Apr 17 23:33:43.690484 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:33:44.385239 sshd[2420]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:44.393774 systemd[1]: sshd@3-172.31.22.159:22-4.175.71.9:56064.service: Deactivated successfully. Apr 17 23:33:44.395160 systemd-logind[2097]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:33:44.400716 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:33:44.403247 systemd-logind[2097]: Removed session 4. 
Apr 17 23:33:44.575341 systemd[1]: Started sshd@4-172.31.22.159:22-4.175.71.9:56078.service - OpenSSH per-connection server daemon (4.175.71.9:56078). Apr 17 23:33:45.606288 sshd[2428]: Accepted publickey for core from 4.175.71.9 port 56078 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:45.608791 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:45.617983 systemd-logind[2097]: New session 5 of user core. Apr 17 23:33:45.626347 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:33:46.171845 sudo[2432]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:33:46.172707 sudo[2432]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:46.188536 sudo[2432]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:46.356176 sshd[2428]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:46.363484 systemd[1]: sshd@4-172.31.22.159:22-4.175.71.9:56078.service: Deactivated successfully. Apr 17 23:33:46.369550 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:33:46.370909 systemd-logind[2097]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:33:46.373162 systemd-logind[2097]: Removed session 5. Apr 17 23:33:46.532766 systemd[1]: Started sshd@5-172.31.22.159:22-4.175.71.9:40770.service - OpenSSH per-connection server daemon (4.175.71.9:40770). Apr 17 23:33:47.571489 sshd[2437]: Accepted publickey for core from 4.175.71.9 port 40770 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:47.574182 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:47.582679 systemd-logind[2097]: New session 6 of user core. Apr 17 23:33:47.589686 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:33:48.118936 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:33:48.119562 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:48.126566 sudo[2442]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:48.136604 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:33:48.137411 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:48.161470 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:33:48.172783 auditctl[2445]: No rules Apr 17 23:33:48.173786 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:33:48.174321 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:33:48.186493 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:33:48.234421 augenrules[2464]: No rules Apr 17 23:33:48.238142 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:33:48.242807 sudo[2441]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:48.409598 sshd[2437]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:48.415751 systemd-logind[2097]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:33:48.419450 systemd[1]: sshd@5-172.31.22.159:22-4.175.71.9:40770.service: Deactivated successfully. Apr 17 23:33:48.424476 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:33:48.426815 systemd-logind[2097]: Removed session 6. Apr 17 23:33:48.596320 systemd[1]: Started sshd@6-172.31.22.159:22-4.175.71.9:40778.service - OpenSSH per-connection server daemon (4.175.71.9:40778). 
Apr 17 23:33:49.627058 sshd[2473]: Accepted publickey for core from 4.175.71.9 port 40778 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:33:49.629610 sshd[2473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:49.637993 systemd-logind[2097]: New session 7 of user core. Apr 17 23:33:49.645374 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:33:50.173247 sudo[2477]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:33:50.174432 sudo[2477]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:50.468362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:33:50.482268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:50.695810 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:33:50.701952 (dockerd)[2498]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:33:50.914328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:50.935670 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:33:51.021391 kubelet[2511]: E0417 23:33:51.021300 2511 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:33:51.031718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:33:51.032147 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 17 23:33:51.178003 dockerd[2498]: time="2026-04-17T23:33:51.177794529Z" level=info msg="Starting up" Apr 17 23:33:51.402425 dockerd[2498]: time="2026-04-17T23:33:51.401840662Z" level=info msg="Loading containers: start." Apr 17 23:33:51.560030 kernel: Initializing XFRM netlink socket Apr 17 23:33:51.599431 (udev-worker)[2537]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:33:51.685574 systemd-networkd[1686]: docker0: Link UP Apr 17 23:33:51.707372 dockerd[2498]: time="2026-04-17T23:33:51.707293439Z" level=info msg="Loading containers: done." Apr 17 23:33:51.735916 dockerd[2498]: time="2026-04-17T23:33:51.735286992Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:33:51.735916 dockerd[2498]: time="2026-04-17T23:33:51.735431688Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:33:51.735916 dockerd[2498]: time="2026-04-17T23:33:51.735612780Z" level=info msg="Daemon has completed initialization" Apr 17 23:33:51.798376 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:33:51.799116 dockerd[2498]: time="2026-04-17T23:33:51.797992740Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:33:53.146724 containerd[2136]: time="2026-04-17T23:33:53.146669087Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:33:53.735431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1865734174.mount: Deactivated successfully. 
Apr 17 23:33:55.297247 containerd[2136]: time="2026-04-17T23:33:55.297138793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.299566 containerd[2136]: time="2026-04-17T23:33:55.299509945Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787" Apr 17 23:33:55.300430 containerd[2136]: time="2026-04-17T23:33:55.300375841Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.312902 containerd[2136]: time="2026-04-17T23:33:55.311795977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.315074 containerd[2136]: time="2026-04-17T23:33:55.314994721Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.168256742s" Apr 17 23:33:55.315229 containerd[2136]: time="2026-04-17T23:33:55.315075949Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 17 23:33:55.316642 containerd[2136]: time="2026-04-17T23:33:55.316597225Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:33:57.094315 containerd[2136]: time="2026-04-17T23:33:57.094221386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:57.097459 containerd[2136]: time="2026-04-17T23:33:57.097393598Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774" Apr 17 23:33:57.099631 containerd[2136]: time="2026-04-17T23:33:57.099562598Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:57.108911 containerd[2136]: time="2026-04-17T23:33:57.106843250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:57.109349 containerd[2136]: time="2026-04-17T23:33:57.109301306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.792466325s" Apr 17 23:33:57.109483 containerd[2136]: time="2026-04-17T23:33:57.109453478Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 17 23:33:57.110347 containerd[2136]: time="2026-04-17T23:33:57.110276078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:33:58.568034 containerd[2136]: time="2026-04-17T23:33:58.567949194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.570404 containerd[2136]: 
time="2026-04-17T23:33:58.569939742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358" Apr 17 23:33:58.573042 containerd[2136]: time="2026-04-17T23:33:58.572352114Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.578458 containerd[2136]: time="2026-04-17T23:33:58.578374878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.581276 containerd[2136]: time="2026-04-17T23:33:58.580942434Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.470601244s" Apr 17 23:33:58.581276 containerd[2136]: time="2026-04-17T23:33:58.581006550Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 17 23:33:58.583015 containerd[2136]: time="2026-04-17T23:33:58.581672274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:33:59.872761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218220789.mount: Deactivated successfully. 
Apr 17 23:34:00.491096 containerd[2136]: time="2026-04-17T23:34:00.491019403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:00.495016 containerd[2136]: time="2026-04-17T23:34:00.494943343Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508" Apr 17 23:34:00.496080 containerd[2136]: time="2026-04-17T23:34:00.496012975Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:00.508104 containerd[2136]: time="2026-04-17T23:34:00.508009543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:00.509815 containerd[2136]: time="2026-04-17T23:34:00.509762155Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.927927317s" Apr 17 23:34:00.510420 containerd[2136]: time="2026-04-17T23:34:00.510002431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 17 23:34:00.512179 containerd[2136]: time="2026-04-17T23:34:00.512122459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:34:01.027915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759362080.mount: Deactivated successfully. 
Apr 17 23:34:01.218566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:34:01.226601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:34:01.681193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:34:01.698254 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:34:01.782949 kubelet[2751]: E0417 23:34:01.782815 2751 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:34:01.790181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:34:01.791770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:34:02.518122 containerd[2136]: time="2026-04-17T23:34:02.518051277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:02.527048 containerd[2136]: time="2026-04-17T23:34:02.526951725Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 17 23:34:02.542366 containerd[2136]: time="2026-04-17T23:34:02.540952953Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:02.563931 containerd[2136]: time="2026-04-17T23:34:02.563741661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:02.566503 containerd[2136]: time="2026-04-17T23:34:02.566250429Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.054063866s"
Apr 17 23:34:02.566503 containerd[2136]: time="2026-04-17T23:34:02.566326197Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 17 23:34:02.567948 containerd[2136]: time="2026-04-17T23:34:02.567717357Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 17 23:34:03.662700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1617135901.mount: Deactivated successfully.
Apr 17 23:34:03.671399 containerd[2136]: time="2026-04-17T23:34:03.671323811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:03.672605 containerd[2136]: time="2026-04-17T23:34:03.672550691Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 17 23:34:03.674218 containerd[2136]: time="2026-04-17T23:34:03.674177795Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:03.679915 containerd[2136]: time="2026-04-17T23:34:03.679663919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:03.681825 containerd[2136]: time="2026-04-17T23:34:03.681572927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.113780234s"
Apr 17 23:34:03.681825 containerd[2136]: time="2026-04-17T23:34:03.681624731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 17 23:34:03.682636 containerd[2136]: time="2026-04-17T23:34:03.682590335Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 17 23:34:04.208813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482587778.mount: Deactivated successfully.
Apr 17 23:34:05.427595 containerd[2136]: time="2026-04-17T23:34:05.427527144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:05.429453 containerd[2136]: time="2026-04-17T23:34:05.429393252Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366"
Apr 17 23:34:05.431635 containerd[2136]: time="2026-04-17T23:34:05.430716048Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:05.437405 containerd[2136]: time="2026-04-17T23:34:05.437352696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:34:05.440054 containerd[2136]: time="2026-04-17T23:34:05.439993476Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.757345169s"
Apr 17 23:34:05.440157 containerd[2136]: time="2026-04-17T23:34:05.440051976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 17 23:34:06.216304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:34:11.968503 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 17 23:34:11.977412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:34:12.259855 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:34:12.260178 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:34:12.263019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:34:12.276523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:34:12.330530 systemd[1]: Reloading requested from client PID 2906 ('systemctl') (unit session-7.scope)...
Apr 17 23:34:12.330564 systemd[1]: Reloading...
Apr 17 23:34:12.574045 zram_generator::config[2949]: No configuration found.
Apr 17 23:34:12.827478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:34:12.998633 systemd[1]: Reloading finished in 666 ms.
Apr 17 23:34:13.094791 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:34:13.095048 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:34:13.096614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:34:13.107835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:34:13.425351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:34:13.439617 (kubelet)[3021]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:34:13.507148 kubelet[3021]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:34:13.507148 kubelet[3021]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:34:13.507148 kubelet[3021]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:34:13.507747 kubelet[3021]: I0417 23:34:13.507227 3021 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:34:15.485013 kubelet[3021]: I0417 23:34:15.484943 3021 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:34:15.485013 kubelet[3021]: I0417 23:34:15.484993 3021 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:34:15.485695 kubelet[3021]: I0417 23:34:15.485351 3021 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:34:15.539914 kubelet[3021]: I0417 23:34:15.539843 3021 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:34:15.543920 kubelet[3021]: E0417 23:34:15.542531 3021 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.22.159:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:34:15.556435 kubelet[3021]: E0417 23:34:15.556355 3021 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:34:15.556435 kubelet[3021]: I0417 23:34:15.556429 3021 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:34:15.564919 kubelet[3021]: I0417 23:34:15.563192 3021 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 17 23:34:15.564919 kubelet[3021]: I0417 23:34:15.563935 3021 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:34:15.564919 kubelet[3021]: I0417 23:34:15.563975 3021 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 17 23:34:15.564919 kubelet[3021]: I0417 23:34:15.564224 3021 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:34:15.565301 kubelet[3021]: I0417 23:34:15.564243 3021 container_manager_linux.go:303] "Creating device plugin manager"
Apr 17 23:34:15.565301 kubelet[3021]: I0417 23:34:15.564574 3021 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:34:15.578110 kubelet[3021]: I0417 23:34:15.578075 3021 kubelet.go:480] "Attempting to sync node with API server"
Apr 17 23:34:15.578301 kubelet[3021]: I0417 23:34:15.578280 3021 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:34:15.579203 kubelet[3021]: I0417 23:34:15.579181 3021 kubelet.go:386] "Adding apiserver pod source"
Apr 17 23:34:15.581681 kubelet[3021]: I0417 23:34:15.581656 3021 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:34:15.586704 kubelet[3021]: E0417 23:34:15.586630 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-159&limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:34:15.588775 kubelet[3021]: E0417 23:34:15.588710 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:34:15.588951 kubelet[3021]: I0417 23:34:15.588903 3021 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:34:15.590082 kubelet[3021]: I0417 23:34:15.590029 3021 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:34:15.590329 kubelet[3021]: W0417 23:34:15.590288 3021 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:34:15.596948 kubelet[3021]: I0417 23:34:15.596451 3021 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 23:34:15.596948 kubelet[3021]: I0417 23:34:15.596519 3021 server.go:1289] "Started kubelet"
Apr 17 23:34:15.599403 kubelet[3021]: I0417 23:34:15.599337 3021 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:34:15.605713 kubelet[3021]: I0417 23:34:15.604832 3021 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:34:15.605713 kubelet[3021]: I0417 23:34:15.605465 3021 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:34:15.609915 kubelet[3021]: I0417 23:34:15.608980 3021 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:34:15.616287 kubelet[3021]: I0417 23:34:15.616224 3021 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:34:15.618904 kubelet[3021]: E0417 23:34:15.616494 3021 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.159:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.159:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-159.18a74903145f7d0a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-159,UID:ip-172-31-22-159,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-159,},FirstTimestamp:2026-04-17 23:34:15.596481802 +0000 UTC m=+2.149556268,LastTimestamp:2026-04-17 23:34:15.596481802 +0000 UTC m=+2.149556268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-159,}"
Apr 17 23:34:15.620083 kubelet[3021]: I0417 23:34:15.620023 3021 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:34:15.623400 kubelet[3021]: I0417 23:34:15.623343 3021 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:34:15.623758 kubelet[3021]: E0417 23:34:15.623709 3021 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-159\" not found"
Apr 17 23:34:15.624844 kubelet[3021]: I0417 23:34:15.624794 3021 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:34:15.624997 kubelet[3021]: I0417 23:34:15.624936 3021 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:34:15.627734 kubelet[3021]: E0417 23:34:15.627662 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:34:15.627978 kubelet[3021]: E0417 23:34:15.627817 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-159?timeout=10s\": dial tcp 172.31.22.159:6443: connect: connection refused" interval="200ms"
Apr 17 23:34:15.628216 kubelet[3021]: I0417 23:34:15.628168 3021 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:34:15.628672 kubelet[3021]: I0417 23:34:15.628386 3021 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:34:15.630289 kubelet[3021]: I0417 23:34:15.630246 3021 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:34:15.637400 kubelet[3021]: E0417 23:34:15.637250 3021 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:34:15.681189 kubelet[3021]: I0417 23:34:15.681013 3021 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:34:15.681953 kubelet[3021]: I0417 23:34:15.681914 3021 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:34:15.681953 kubelet[3021]: I0417 23:34:15.681948 3021 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:34:15.682130 kubelet[3021]: I0417 23:34:15.681978 3021 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:34:15.685759 kubelet[3021]: I0417 23:34:15.685699 3021 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:34:15.685759 kubelet[3021]: I0417 23:34:15.685756 3021 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:34:15.686141 kubelet[3021]: I0417 23:34:15.685809 3021 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:34:15.686141 kubelet[3021]: I0417 23:34:15.685825 3021 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:34:15.687015 kubelet[3021]: E0417 23:34:15.686539 3021 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:34:15.687410 kubelet[3021]: E0417 23:34:15.687350 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:34:15.687557 kubelet[3021]: I0417 23:34:15.687524 3021 policy_none.go:49] "None policy: Start"
Apr 17 23:34:15.687648 kubelet[3021]: I0417 23:34:15.687558 3021 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:34:15.687648 kubelet[3021]: I0417 23:34:15.687595 3021 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:34:15.710263 kubelet[3021]: E0417 23:34:15.710009 3021 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:34:15.710415 kubelet[3021]: I0417 23:34:15.710309 3021 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:34:15.710415 kubelet[3021]: I0417 23:34:15.710333 3021 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:34:15.713538 kubelet[3021]: I0417 23:34:15.713390 3021 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:34:15.715792 kubelet[3021]: E0417 23:34:15.715597 3021 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:34:15.715792 kubelet[3021]: E0417 23:34:15.715680 3021 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-159\" not found"
Apr 17 23:34:15.802612 kubelet[3021]: E0417 23:34:15.802308 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159"
Apr 17 23:34:15.812456 kubelet[3021]: E0417 23:34:15.812027 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159"
Apr 17 23:34:15.812456 kubelet[3021]: E0417 23:34:15.812195 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159"
Apr 17 23:34:15.815750 kubelet[3021]: I0417 23:34:15.815621 3021 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159"
Apr 17 23:34:15.816270 kubelet[3021]: E0417 23:34:15.816178 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.159:6443/api/v1/nodes\": dial tcp 172.31.22.159:6443: connect: connection refused" node="ip-172-31-22-159"
Apr 17 23:34:15.829182 kubelet[3021]: E0417 23:34:15.829131 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-159?timeout=10s\": dial tcp 172.31.22.159:6443: connect: connection refused" interval="400ms"
Apr 17 23:34:15.926064 kubelet[3021]: I0417 23:34:15.925825 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159"
Apr 17 23:34:15.926064 kubelet[3021]: I0417 23:34:15.925910 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/980082610900604c793db42fef4ea36a-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-159\" (UID: \"980082610900604c793db42fef4ea36a\") " pod="kube-system/kube-scheduler-ip-172-31-22-159"
Apr 17 23:34:15.926064 kubelet[3021]: I0417 23:34:15.925977 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159"
Apr 17 23:34:15.926064 kubelet[3021]: I0417 23:34:15.926035 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159"
Apr 17 23:34:15.926372 kubelet[3021]: I0417 23:34:15.926084 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159"
Apr 17 23:34:15.926372 kubelet[3021]: I0417 23:34:15.926137 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159"
Apr 17 23:34:15.926372 kubelet[3021]: I0417 23:34:15.926182 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159"
Apr 17 23:34:15.926372 kubelet[3021]: I0417 23:34:15.926220 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-ca-certs\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159"
Apr 17 23:34:15.926372 kubelet[3021]: I0417 23:34:15.926266 3021 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159"
Apr 17 23:34:16.019835 kubelet[3021]: I0417 23:34:16.019379 3021 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159"
Apr 17 23:34:16.020087 kubelet[3021]: E0417 23:34:16.020026 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.159:6443/api/v1/nodes\": dial tcp 172.31.22.159:6443: connect: connection refused" node="ip-172-31-22-159"
Apr 17 23:34:16.105328 containerd[2136]: time="2026-04-17T23:34:16.105103929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-159,Uid:43afd565412d02ede4608c83bca7b21f,Namespace:kube-system,Attempt:0,}"
Apr 17 23:34:16.114319 containerd[2136]: time="2026-04-17T23:34:16.113960781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-159,Uid:a93c335e76c54faeca5b3bd0eaed5e7a,Namespace:kube-system,Attempt:0,}"
Apr 17 23:34:16.114319 containerd[2136]: time="2026-04-17T23:34:16.114053577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-159,Uid:980082610900604c793db42fef4ea36a,Namespace:kube-system,Attempt:0,}"
Apr 17 23:34:16.231118 kubelet[3021]: E0417 23:34:16.231050 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-159?timeout=10s\": dial tcp 172.31.22.159:6443: connect: connection refused" interval="800ms"
Apr 17 23:34:16.423317 kubelet[3021]: I0417 23:34:16.422270 3021 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159"
Apr 17 23:34:16.423317 kubelet[3021]: E0417 23:34:16.422759 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.159:6443/api/v1/nodes\": dial tcp 172.31.22.159:6443: connect: connection refused" node="ip-172-31-22-159"
Apr 17 23:34:16.636394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409640687.mount: Deactivated successfully.
Apr 17 23:34:16.647618 containerd[2136]: time="2026-04-17T23:34:16.647538731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:34:16.654680 containerd[2136]: time="2026-04-17T23:34:16.654607799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Apr 17 23:34:16.656457 containerd[2136]: time="2026-04-17T23:34:16.656393699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:34:16.659356 containerd[2136]: time="2026-04-17T23:34:16.659295323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:34:16.662156 containerd[2136]: time="2026-04-17T23:34:16.662076467Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:34:16.665193 containerd[2136]: time="2026-04-17T23:34:16.665122835Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:34:16.666631 containerd[2136]: time="2026-04-17T23:34:16.666564191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:34:16.668558 containerd[2136]: time="2026-04-17T23:34:16.668435999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:34:16.673456 containerd[2136]: time="2026-04-17T23:34:16.673015271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.779414ms"
Apr 17 23:34:16.677532 containerd[2136]: time="2026-04-17T23:34:16.677458079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.375306ms"
Apr 17 23:34:16.685919 containerd[2136]: time="2026-04-17T23:34:16.685824816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.665603ms"
Apr 17 23:34:16.884545 kubelet[3021]: E0417 23:34:16.884482 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.22.159:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:34:16.891745 containerd[2136]: time="2026-04-17T23:34:16.890661325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:34:16.891745 containerd[2136]: time="2026-04-17T23:34:16.890760565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:34:16.891745 containerd[2136]: time="2026-04-17T23:34:16.890797561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.891745 containerd[2136]: time="2026-04-17T23:34:16.890988445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.901323 containerd[2136]: time="2026-04-17T23:34:16.900477565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:34:16.901323 containerd[2136]: time="2026-04-17T23:34:16.900597769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:34:16.901323 containerd[2136]: time="2026-04-17T23:34:16.900640501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.901323 containerd[2136]: time="2026-04-17T23:34:16.900816577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.902572 containerd[2136]: time="2026-04-17T23:34:16.902220073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:34:16.902572 containerd[2136]: time="2026-04-17T23:34:16.902313973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:34:16.902572 containerd[2136]: time="2026-04-17T23:34:16.902352529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.903345 containerd[2136]: time="2026-04-17T23:34:16.903173989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:34:16.953184 kubelet[3021]: E0417 23:34:16.950638 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.22.159:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-159&limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:34:17.034396 kubelet[3021]: E0417 23:34:17.034289 3021 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-159?timeout=10s\": dial tcp 172.31.22.159:6443: connect: connection refused" interval="1.6s"
Apr 17 23:34:17.047868 kubelet[3021]: E0417 23:34:17.047576 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.22.159:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:34:17.061580 kubelet[3021]: E0417 23:34:17.061487 3021 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.22.159:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.159:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:34:17.071641 containerd[2136]: time="2026-04-17T23:34:17.071441697Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-159,Uid:980082610900604c793db42fef4ea36a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddf7f49b2e948a8818774a9611066ce29effdb63538cd6927b2164d1cd3cb03e\"" Apr 17 23:34:17.075340 containerd[2136]: time="2026-04-17T23:34:17.075264561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-159,Uid:43afd565412d02ede4608c83bca7b21f,Namespace:kube-system,Attempt:0,} returns sandbox id \"417f6b4b246a783e6d1dd47c8393285c3a868082fbd8d6774b777a0d290951c7\"" Apr 17 23:34:17.083591 containerd[2136]: time="2026-04-17T23:34:17.083513613Z" level=info msg="CreateContainer within sandbox \"ddf7f49b2e948a8818774a9611066ce29effdb63538cd6927b2164d1cd3cb03e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:34:17.090476 containerd[2136]: time="2026-04-17T23:34:17.090198358Z" level=info msg="CreateContainer within sandbox \"417f6b4b246a783e6d1dd47c8393285c3a868082fbd8d6774b777a0d290951c7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:34:17.098406 containerd[2136]: time="2026-04-17T23:34:17.098311186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-159,Uid:a93c335e76c54faeca5b3bd0eaed5e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b15d6307159a8ccb8dbd96f23f68683b41178256ccd42c75014ba499e1be14f\"" Apr 17 23:34:17.110952 containerd[2136]: time="2026-04-17T23:34:17.110636182Z" level=info msg="CreateContainer within sandbox \"7b15d6307159a8ccb8dbd96f23f68683b41178256ccd42c75014ba499e1be14f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:34:17.125688 containerd[2136]: time="2026-04-17T23:34:17.124725754Z" level=info msg="CreateContainer within sandbox \"ddf7f49b2e948a8818774a9611066ce29effdb63538cd6927b2164d1cd3cb03e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96\"" Apr 17 23:34:17.126285 containerd[2136]: time="2026-04-17T23:34:17.126224350Z" level=info msg="StartContainer for \"6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96\"" Apr 17 23:34:17.136099 containerd[2136]: time="2026-04-17T23:34:17.136012198Z" level=info msg="CreateContainer within sandbox \"417f6b4b246a783e6d1dd47c8393285c3a868082fbd8d6774b777a0d290951c7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c5884c5d5668727255faac98e882a4f76203361531b91c2465f008c93adfdea\"" Apr 17 23:34:17.137674 containerd[2136]: time="2026-04-17T23:34:17.137379730Z" level=info msg="StartContainer for \"9c5884c5d5668727255faac98e882a4f76203361531b91c2465f008c93adfdea\"" Apr 17 23:34:17.157264 containerd[2136]: time="2026-04-17T23:34:17.157164742Z" level=info msg="CreateContainer within sandbox \"7b15d6307159a8ccb8dbd96f23f68683b41178256ccd42c75014ba499e1be14f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417\"" Apr 17 23:34:17.159656 containerd[2136]: time="2026-04-17T23:34:17.158736118Z" level=info msg="StartContainer for \"581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417\"" Apr 17 23:34:17.228682 kubelet[3021]: I0417 23:34:17.228642 3021 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159" Apr 17 23:34:17.231794 kubelet[3021]: E0417 23:34:17.231726 3021 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.22.159:6443/api/v1/nodes\": dial tcp 172.31.22.159:6443: connect: connection refused" node="ip-172-31-22-159" Apr 17 23:34:17.308956 containerd[2136]: time="2026-04-17T23:34:17.308855747Z" level=info msg="StartContainer for \"6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96\" returns successfully" Apr 17 23:34:17.366521 containerd[2136]: 
time="2026-04-17T23:34:17.365736707Z" level=info msg="StartContainer for \"9c5884c5d5668727255faac98e882a4f76203361531b91c2465f008c93adfdea\" returns successfully" Apr 17 23:34:17.398635 containerd[2136]: time="2026-04-17T23:34:17.398419895Z" level=info msg="StartContainer for \"581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417\" returns successfully" Apr 17 23:34:17.713825 kubelet[3021]: E0417 23:34:17.710955 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:17.718895 kubelet[3021]: E0417 23:34:17.717499 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:17.724923 kubelet[3021]: E0417 23:34:17.724342 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:18.732950 kubelet[3021]: E0417 23:34:18.731655 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:18.732950 kubelet[3021]: E0417 23:34:18.731842 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:18.840940 kubelet[3021]: I0417 23:34:18.837554 3021 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159" Apr 17 23:34:19.734922 kubelet[3021]: E0417 23:34:19.733500 3021 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:20.119666 kubelet[3021]: E0417 23:34:20.119520 3021 nodelease.go:49] "Failed to get node when 
trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-159\" not found" node="ip-172-31-22-159" Apr 17 23:34:20.226150 update_engine[2105]: I20260417 23:34:20.223928 2105 update_attempter.cc:509] Updating boot flags... Apr 17 23:34:20.229943 kubelet[3021]: I0417 23:34:20.228523 3021 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-159" Apr 17 23:34:20.229943 kubelet[3021]: I0417 23:34:20.228730 3021 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:20.274322 kubelet[3021]: E0417 23:34:20.273096 3021 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-22-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:20.274322 kubelet[3021]: I0417 23:34:20.273147 3021 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:20.284939 kubelet[3021]: E0417 23:34:20.281221 3021 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-22-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:20.284939 kubelet[3021]: I0417 23:34:20.281281 3021 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-159" Apr 17 23:34:20.287924 kubelet[3021]: E0417 23:34:20.287453 3021 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-22-159\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-22-159" Apr 17 23:34:20.453908 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3316) Apr 17 23:34:20.591118 kubelet[3021]: I0417 23:34:20.591081 3021 apiserver.go:52] "Watching 
apiserver" Apr 17 23:34:20.628898 kubelet[3021]: I0417 23:34:20.627019 3021 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:34:21.108901 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3307) Apr 17 23:34:22.939001 systemd[1]: Reloading requested from client PID 3487 ('systemctl') (unit session-7.scope)... Apr 17 23:34:22.939033 systemd[1]: Reloading... Apr 17 23:34:23.089031 zram_generator::config[3527]: No configuration found. Apr 17 23:34:23.363791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:34:23.563947 systemd[1]: Reloading finished in 624 ms. Apr 17 23:34:23.625356 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:23.634578 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:34:23.635480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:23.649480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:34:24.023298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:34:24.038420 (kubelet)[3597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:34:24.210216 kubelet[3597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:34:24.210216 kubelet[3597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 17 23:34:24.210216 kubelet[3597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:34:24.210216 kubelet[3597]: I0417 23:34:24.202747 3597 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:34:24.233729 kubelet[3597]: I0417 23:34:24.231320 3597 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:34:24.233729 kubelet[3597]: I0417 23:34:24.232689 3597 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:34:24.233959 kubelet[3597]: I0417 23:34:24.233750 3597 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:34:24.239717 kubelet[3597]: I0417 23:34:24.238912 3597 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:34:24.250927 kubelet[3597]: I0417 23:34:24.250563 3597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:34:24.266181 sudo[3612]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 17 23:34:24.267829 sudo[3612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 17 23:34:24.271301 kubelet[3597]: E0417 23:34:24.267002 3597 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:34:24.271301 kubelet[3597]: I0417 23:34:24.267767 3597 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 17 23:34:24.288998 kubelet[3597]: I0417 23:34:24.288837 3597 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:34:24.289864 kubelet[3597]: I0417 23:34:24.289750 3597 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:34:24.290226 kubelet[3597]: I0417 23:34:24.289807 3597 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-159","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 
23:34:24.290226 kubelet[3597]: I0417 23:34:24.290138 3597 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:34:24.290226 kubelet[3597]: I0417 23:34:24.290157 3597 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:34:24.291356 kubelet[3597]: I0417 23:34:24.290243 3597 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:34:24.291356 kubelet[3597]: I0417 23:34:24.290501 3597 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:34:24.291356 kubelet[3597]: I0417 23:34:24.290533 3597 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:34:24.291356 kubelet[3597]: I0417 23:34:24.290580 3597 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:34:24.291356 kubelet[3597]: I0417 23:34:24.290608 3597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:34:24.304903 kubelet[3597]: I0417 23:34:24.303307 3597 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:34:24.304903 kubelet[3597]: I0417 23:34:24.304324 3597 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:34:24.309211 kubelet[3597]: I0417 23:34:24.309181 3597 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:34:24.309635 kubelet[3597]: I0417 23:34:24.309613 3597 server.go:1289] "Started kubelet" Apr 17 23:34:24.314502 kubelet[3597]: I0417 23:34:24.314444 3597 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:34:24.318921 kubelet[3597]: I0417 23:34:24.318838 3597 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:34:24.336233 kubelet[3597]: I0417 23:34:24.336130 3597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:34:24.365727 kubelet[3597]: I0417 23:34:24.363942 
3597 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:34:24.365727 kubelet[3597]: I0417 23:34:24.364395 3597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:34:24.384321 kubelet[3597]: I0417 23:34:24.377476 3597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:34:24.395006 kubelet[3597]: I0417 23:34:24.391839 3597 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:34:24.395006 kubelet[3597]: E0417 23:34:24.392235 3597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-22-159\" not found" Apr 17 23:34:24.395006 kubelet[3597]: I0417 23:34:24.393333 3597 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:34:24.395006 kubelet[3597]: I0417 23:34:24.393569 3597 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:34:24.411068 kubelet[3597]: E0417 23:34:24.409959 3597 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:34:24.412653 kubelet[3597]: I0417 23:34:24.412600 3597 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:34:24.412653 kubelet[3597]: I0417 23:34:24.412660 3597 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:34:24.412841 kubelet[3597]: I0417 23:34:24.412800 3597 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:34:24.495766 kubelet[3597]: I0417 23:34:24.495379 3597 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:34:24.508095 kubelet[3597]: I0417 23:34:24.508039 3597 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:34:24.508246 kubelet[3597]: I0417 23:34:24.508122 3597 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:34:24.508246 kubelet[3597]: I0417 23:34:24.508164 3597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:34:24.508246 kubelet[3597]: I0417 23:34:24.508213 3597 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:34:24.508403 kubelet[3597]: E0417 23:34:24.508312 3597 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.606603 3597 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.606645 3597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.606681 3597 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607143 3597 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607166 3597 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607241 3597 policy_none.go:49] "None policy: Start" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607259 3597 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607281 3597 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:34:24.607571 kubelet[3597]: I0417 23:34:24.607452 3597 state_mem.go:75] "Updated machine memory state" Apr 17 23:34:24.609446 kubelet[3597]: E0417 23:34:24.609399 3597 kubelet.go:2460] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Apr 17 23:34:24.611974 kubelet[3597]: E0417 23:34:24.611928 3597 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:34:24.612244 kubelet[3597]: I0417 23:34:24.612209 3597 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:34:24.612310 kubelet[3597]: I0417 23:34:24.612242 3597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:34:24.616582 kubelet[3597]: I0417 23:34:24.616515 3597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:34:24.621652 kubelet[3597]: E0417 23:34:24.620840 3597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:34:24.731590 kubelet[3597]: I0417 23:34:24.731543 3597 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-22-159" Apr 17 23:34:24.753965 kubelet[3597]: I0417 23:34:24.752768 3597 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-22-159" Apr 17 23:34:24.754107 kubelet[3597]: I0417 23:34:24.754063 3597 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-22-159" Apr 17 23:34:24.810507 kubelet[3597]: I0417 23:34:24.810430 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:24.812448 kubelet[3597]: I0417 23:34:24.811382 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.812448 kubelet[3597]: I0417 23:34:24.812082 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-22-159" Apr 17 23:34:24.897435 kubelet[3597]: I0417 23:34:24.897297 3597 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:24.897435 kubelet[3597]: I0417 23:34:24.897376 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.897435 kubelet[3597]: I0417 23:34:24.897418 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/980082610900604c793db42fef4ea36a-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-159\" (UID: \"980082610900604c793db42fef4ea36a\") " pod="kube-system/kube-scheduler-ip-172-31-22-159" Apr 17 23:34:24.897680 kubelet[3597]: I0417 23:34:24.897452 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-ca-certs\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:24.897680 kubelet[3597]: I0417 23:34:24.897491 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.897680 kubelet[3597]: 
I0417 23:34:24.897526 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.897680 kubelet[3597]: I0417 23:34:24.897560 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.897680 kubelet[3597]: I0417 23:34:24.897595 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a93c335e76c54faeca5b3bd0eaed5e7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-159\" (UID: \"a93c335e76c54faeca5b3bd0eaed5e7a\") " pod="kube-system/kube-controller-manager-ip-172-31-22-159" Apr 17 23:34:24.899443 kubelet[3597]: I0417 23:34:24.897637 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43afd565412d02ede4608c83bca7b21f-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-159\" (UID: \"43afd565412d02ede4608c83bca7b21f\") " pod="kube-system/kube-apiserver-ip-172-31-22-159" Apr 17 23:34:25.295955 kubelet[3597]: I0417 23:34:25.295810 3597 apiserver.go:52] "Watching apiserver" Apr 17 23:34:25.303263 sudo[3612]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:25.395925 kubelet[3597]: I0417 23:34:25.393722 3597 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 
23:34:25.412531 kubelet[3597]: I0417 23:34:25.412266 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-159" podStartSLOduration=1.412243459 podStartE2EDuration="1.412243459s" podCreationTimestamp="2026-04-17 23:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:25.390322891 +0000 UTC m=+1.332456200" watchObservedRunningTime="2026-04-17 23:34:25.412243459 +0000 UTC m=+1.354376684" Apr 17 23:34:25.417137 kubelet[3597]: I0417 23:34:25.413661 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-159" podStartSLOduration=1.4136331549999999 podStartE2EDuration="1.413633155s" podCreationTimestamp="2026-04-17 23:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:25.406360567 +0000 UTC m=+1.348493816" watchObservedRunningTime="2026-04-17 23:34:25.413633155 +0000 UTC m=+1.355766416" Apr 17 23:34:25.456913 kubelet[3597]: I0417 23:34:25.456672 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-159" podStartSLOduration=1.456649723 podStartE2EDuration="1.456649723s" podCreationTimestamp="2026-04-17 23:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:25.434850235 +0000 UTC m=+1.376983460" watchObservedRunningTime="2026-04-17 23:34:25.456649723 +0000 UTC m=+1.398782948" Apr 17 23:34:27.882747 sudo[2477]: pam_unix(sudo:session): session closed for user root Apr 17 23:34:28.051188 sshd[2473]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:28.058146 systemd-logind[2097]: Session 7 logged out. Waiting for processes to exit. 
Apr 17 23:34:28.059385 systemd[1]: sshd@6-172.31.22.159:22-4.175.71.9:40778.service: Deactivated successfully. Apr 17 23:34:28.066671 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:34:28.070250 systemd-logind[2097]: Removed session 7. Apr 17 23:34:28.880152 kubelet[3597]: I0417 23:34:28.879964 3597 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:34:28.886652 containerd[2136]: time="2026-04-17T23:34:28.883381032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:34:28.890118 kubelet[3597]: I0417 23:34:28.884413 3597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:34:29.729393 kubelet[3597]: I0417 23:34:29.727486 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-xtables-lock\") pod \"kube-proxy-n4zlf\" (UID: \"d6a5d83a-8f33-4cb0-b34f-18f9925a1412\") " pod="kube-system/kube-proxy-n4zlf" Apr 17 23:34:29.729393 kubelet[3597]: I0417 23:34:29.727553 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb6st\" (UniqueName: \"kubernetes.io/projected/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-kube-api-access-gb6st\") pod \"kube-proxy-n4zlf\" (UID: \"d6a5d83a-8f33-4cb0-b34f-18f9925a1412\") " pod="kube-system/kube-proxy-n4zlf" Apr 17 23:34:29.729393 kubelet[3597]: I0417 23:34:29.727609 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-lib-modules\") pod \"kube-proxy-n4zlf\" (UID: \"d6a5d83a-8f33-4cb0-b34f-18f9925a1412\") " pod="kube-system/kube-proxy-n4zlf" Apr 17 23:34:29.729393 kubelet[3597]: I0417 23:34:29.727652 3597 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-kube-proxy\") pod \"kube-proxy-n4zlf\" (UID: \"d6a5d83a-8f33-4cb0-b34f-18f9925a1412\") " pod="kube-system/kube-proxy-n4zlf" Apr 17 23:34:29.902908 kubelet[3597]: E0417 23:34:29.900752 3597 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 17 23:34:29.902908 kubelet[3597]: E0417 23:34:29.900824 3597 projected.go:194] Error preparing data for projected volume kube-api-access-gb6st for pod kube-system/kube-proxy-n4zlf: configmap "kube-root-ca.crt" not found Apr 17 23:34:29.902908 kubelet[3597]: E0417 23:34:29.900986 3597 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-kube-api-access-gb6st podName:d6a5d83a-8f33-4cb0-b34f-18f9925a1412 nodeName:}" failed. No retries permitted until 2026-04-17 23:34:30.400949621 +0000 UTC m=+6.343082834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gb6st" (UniqueName: "kubernetes.io/projected/d6a5d83a-8f33-4cb0-b34f-18f9925a1412-kube-api-access-gb6st") pod "kube-proxy-n4zlf" (UID: "d6a5d83a-8f33-4cb0-b34f-18f9925a1412") : configmap "kube-root-ca.crt" not found Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.930200 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-hostproc\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.930270 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-net\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.931121 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-etc-cni-netd\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.931245 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-xtables-lock\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.931342 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/cff77200-3dee-474d-9e4f-bc525ef22bad-clustermesh-secrets\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932241 kubelet[3597]: I0417 23:34:29.931429 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-run\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931497 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cni-path\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931534 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-config-path\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931598 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-hubble-tls\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931666 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-bpf-maps\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " 
pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931703 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-cgroup\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.932702 kubelet[3597]: I0417 23:34:29.931942 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-lib-modules\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.933045 kubelet[3597]: I0417 23:34:29.931992 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-kernel\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:29.933045 kubelet[3597]: I0417 23:34:29.932032 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jxjl\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-kube-api-access-5jxjl\") pod \"cilium-7grst\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " pod="kube-system/cilium-7grst" Apr 17 23:34:30.334809 kubelet[3597]: I0417 23:34:30.334741 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77421517-915e-4faa-98c4-1ef7a0fff6fb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pjjrv\" (UID: \"77421517-915e-4faa-98c4-1ef7a0fff6fb\") " pod="kube-system/cilium-operator-6c4d7847fc-pjjrv" Apr 17 23:34:30.334999 
kubelet[3597]: I0417 23:34:30.334838 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm6gx\" (UniqueName: \"kubernetes.io/projected/77421517-915e-4faa-98c4-1ef7a0fff6fb-kube-api-access-mm6gx\") pod \"cilium-operator-6c4d7847fc-pjjrv\" (UID: \"77421517-915e-4faa-98c4-1ef7a0fff6fb\") " pod="kube-system/cilium-operator-6c4d7847fc-pjjrv" Apr 17 23:34:30.354385 containerd[2136]: time="2026-04-17T23:34:30.354312719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7grst,Uid:cff77200-3dee-474d-9e4f-bc525ef22bad,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:30.396138 containerd[2136]: time="2026-04-17T23:34:30.395805096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:30.396438 containerd[2136]: time="2026-04-17T23:34:30.396207024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:30.396606 containerd[2136]: time="2026-04-17T23:34:30.396407112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.397657 containerd[2136]: time="2026-04-17T23:34:30.397561788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.477314 containerd[2136]: time="2026-04-17T23:34:30.477242028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pjjrv,Uid:77421517-915e-4faa-98c4-1ef7a0fff6fb,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:30.479371 containerd[2136]: time="2026-04-17T23:34:30.479297724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7grst,Uid:cff77200-3dee-474d-9e4f-bc525ef22bad,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\"" Apr 17 23:34:30.482788 containerd[2136]: time="2026-04-17T23:34:30.482716692Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 17 23:34:30.528201 containerd[2136]: time="2026-04-17T23:34:30.527664612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:30.528201 containerd[2136]: time="2026-04-17T23:34:30.527762784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:30.528201 containerd[2136]: time="2026-04-17T23:34:30.527787960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.528201 containerd[2136]: time="2026-04-17T23:34:30.527991984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.605054 containerd[2136]: time="2026-04-17T23:34:30.604219825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n4zlf,Uid:d6a5d83a-8f33-4cb0-b34f-18f9925a1412,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:30.624058 containerd[2136]: time="2026-04-17T23:34:30.624009265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pjjrv,Uid:77421517-915e-4faa-98c4-1ef7a0fff6fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\"" Apr 17 23:34:30.652017 containerd[2136]: time="2026-04-17T23:34:30.651631225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:30.652017 containerd[2136]: time="2026-04-17T23:34:30.651743893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:30.652017 containerd[2136]: time="2026-04-17T23:34:30.651820825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.652489 containerd[2136]: time="2026-04-17T23:34:30.652244737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:30.724185 containerd[2136]: time="2026-04-17T23:34:30.723914725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n4zlf,Uid:d6a5d83a-8f33-4cb0-b34f-18f9925a1412,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7619aa4b3cdd40942deae158c74776ca93ba9965d82c34f11f76c54f23c876\"" Apr 17 23:34:30.732809 containerd[2136]: time="2026-04-17T23:34:30.732735949Z" level=info msg="CreateContainer within sandbox \"1a7619aa4b3cdd40942deae158c74776ca93ba9965d82c34f11f76c54f23c876\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:34:30.760835 containerd[2136]: time="2026-04-17T23:34:30.760699777Z" level=info msg="CreateContainer within sandbox \"1a7619aa4b3cdd40942deae158c74776ca93ba9965d82c34f11f76c54f23c876\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b78e22842a9b73db2f54c76481974b4e8cd6cf734505d8e83114b88a3c9a667\"" Apr 17 23:34:30.765057 containerd[2136]: time="2026-04-17T23:34:30.762594853Z" level=info msg="StartContainer for \"2b78e22842a9b73db2f54c76481974b4e8cd6cf734505d8e83114b88a3c9a667\"" Apr 17 23:34:30.865190 containerd[2136]: time="2026-04-17T23:34:30.865032422Z" level=info msg="StartContainer for \"2b78e22842a9b73db2f54c76481974b4e8cd6cf734505d8e83114b88a3c9a667\" returns successfully" Apr 17 23:34:31.716580 kubelet[3597]: I0417 23:34:31.716054 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n4zlf" podStartSLOduration=2.716031794 podStartE2EDuration="2.716031794s" podCreationTimestamp="2026-04-17 23:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:31.715936454 +0000 UTC m=+7.658069715" watchObservedRunningTime="2026-04-17 23:34:31.716031794 +0000 UTC m=+7.658165019" Apr 17 23:34:35.937704 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4186977537.mount: Deactivated successfully. Apr 17 23:34:38.534143 containerd[2136]: time="2026-04-17T23:34:38.534069464Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:38.536006 containerd[2136]: time="2026-04-17T23:34:38.535950908Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 17 23:34:38.537615 containerd[2136]: time="2026-04-17T23:34:38.536858924Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:38.540396 containerd[2136]: time="2026-04-17T23:34:38.540326528Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.05754068s" Apr 17 23:34:38.540547 containerd[2136]: time="2026-04-17T23:34:38.540393668Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 17 23:34:38.546008 containerd[2136]: time="2026-04-17T23:34:38.545926796Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 17 23:34:38.551576 containerd[2136]: time="2026-04-17T23:34:38.551376272Z" level=info msg="CreateContainer within sandbox 
\"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 17 23:34:38.576072 containerd[2136]: time="2026-04-17T23:34:38.575865956Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\"" Apr 17 23:34:38.579376 containerd[2136]: time="2026-04-17T23:34:38.577312376Z" level=info msg="StartContainer for \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\"" Apr 17 23:34:38.683860 containerd[2136]: time="2026-04-17T23:34:38.683739837Z" level=info msg="StartContainer for \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\" returns successfully" Apr 17 23:34:39.562753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b-rootfs.mount: Deactivated successfully. Apr 17 23:34:39.937529 containerd[2136]: time="2026-04-17T23:34:39.937093055Z" level=info msg="shim disconnected" id=2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b namespace=k8s.io Apr 17 23:34:39.937529 containerd[2136]: time="2026-04-17T23:34:39.937166687Z" level=warning msg="cleaning up after shim disconnected" id=2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b namespace=k8s.io Apr 17 23:34:39.937529 containerd[2136]: time="2026-04-17T23:34:39.937188407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:40.429103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount934930314.mount: Deactivated successfully. 
Apr 17 23:34:40.770041 containerd[2136]: time="2026-04-17T23:34:40.769797311Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 17 23:34:40.817713 containerd[2136]: time="2026-04-17T23:34:40.817249079Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\"" Apr 17 23:34:40.824321 containerd[2136]: time="2026-04-17T23:34:40.824088071Z" level=info msg="StartContainer for \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\"" Apr 17 23:34:40.964453 containerd[2136]: time="2026-04-17T23:34:40.964283580Z" level=info msg="StartContainer for \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\" returns successfully" Apr 17 23:34:40.994057 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:34:40.994683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:34:40.994812 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:34:41.008040 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:34:41.067671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:34:41.095074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a-rootfs.mount: Deactivated successfully. 
Apr 17 23:34:41.192514 containerd[2136]: time="2026-04-17T23:34:41.192343953Z" level=info msg="shim disconnected" id=c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a namespace=k8s.io Apr 17 23:34:41.193009 containerd[2136]: time="2026-04-17T23:34:41.192702237Z" level=warning msg="cleaning up after shim disconnected" id=c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a namespace=k8s.io Apr 17 23:34:41.193009 containerd[2136]: time="2026-04-17T23:34:41.192731877Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:41.297157 containerd[2136]: time="2026-04-17T23:34:41.297086326Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:41.298463 containerd[2136]: time="2026-04-17T23:34:41.298404394Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 17 23:34:41.299408 containerd[2136]: time="2026-04-17T23:34:41.299306242Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:41.302685 containerd[2136]: time="2026-04-17T23:34:41.302343130Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.756347742s" Apr 17 23:34:41.302685 containerd[2136]: time="2026-04-17T23:34:41.302409406Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 17 23:34:41.311127 containerd[2136]: time="2026-04-17T23:34:41.311044534Z" level=info msg="CreateContainer within sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 17 23:34:41.324255 containerd[2136]: time="2026-04-17T23:34:41.324067246Z" level=info msg="CreateContainer within sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\"" Apr 17 23:34:41.328124 containerd[2136]: time="2026-04-17T23:34:41.326547430Z" level=info msg="StartContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\"" Apr 17 23:34:41.417523 containerd[2136]: time="2026-04-17T23:34:41.417454114Z" level=info msg="StartContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" returns successfully" Apr 17 23:34:41.799911 containerd[2136]: time="2026-04-17T23:34:41.797689008Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 17 23:34:41.854939 containerd[2136]: time="2026-04-17T23:34:41.852939973Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\"" Apr 17 23:34:41.863158 containerd[2136]: time="2026-04-17T23:34:41.861792757Z" level=info msg="StartContainer for \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\"" Apr 17 
23:34:41.936772 kubelet[3597]: I0417 23:34:41.936658 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pjjrv" podStartSLOduration=1.259639752 podStartE2EDuration="11.936632101s" podCreationTimestamp="2026-04-17 23:34:30 +0000 UTC" firstStartedPulling="2026-04-17 23:34:30.626839297 +0000 UTC m=+6.568972510" lastFinishedPulling="2026-04-17 23:34:41.303831634 +0000 UTC m=+17.245964859" observedRunningTime="2026-04-17 23:34:41.934027033 +0000 UTC m=+17.876160294" watchObservedRunningTime="2026-04-17 23:34:41.936632101 +0000 UTC m=+17.878765326" Apr 17 23:34:42.143637 containerd[2136]: time="2026-04-17T23:34:42.143329894Z" level=info msg="StartContainer for \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\" returns successfully" Apr 17 23:34:42.284291 containerd[2136]: time="2026-04-17T23:34:42.284110919Z" level=info msg="shim disconnected" id=f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965 namespace=k8s.io Apr 17 23:34:42.284291 containerd[2136]: time="2026-04-17T23:34:42.284239175Z" level=warning msg="cleaning up after shim disconnected" id=f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965 namespace=k8s.io Apr 17 23:34:42.285294 containerd[2136]: time="2026-04-17T23:34:42.284262215Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:42.793609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965-rootfs.mount: Deactivated successfully. 
Apr 17 23:34:42.828908 containerd[2136]: time="2026-04-17T23:34:42.826819009Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 17 23:34:42.883331 containerd[2136]: time="2026-04-17T23:34:42.883243538Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\"" Apr 17 23:34:42.887930 containerd[2136]: time="2026-04-17T23:34:42.884324798Z" level=info msg="StartContainer for \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\"" Apr 17 23:34:43.176511 containerd[2136]: time="2026-04-17T23:34:43.176331719Z" level=info msg="StartContainer for \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\" returns successfully" Apr 17 23:34:43.243461 containerd[2136]: time="2026-04-17T23:34:43.243363659Z" level=info msg="shim disconnected" id=9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720 namespace=k8s.io Apr 17 23:34:43.243461 containerd[2136]: time="2026-04-17T23:34:43.243448151Z" level=warning msg="cleaning up after shim disconnected" id=9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720 namespace=k8s.io Apr 17 23:34:43.243785 containerd[2136]: time="2026-04-17T23:34:43.243470351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:43.792233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720-rootfs.mount: Deactivated successfully. 
Apr 17 23:34:43.819142 containerd[2136]: time="2026-04-17T23:34:43.819076838Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 17 23:34:43.852675 containerd[2136]: time="2026-04-17T23:34:43.852592634Z" level=info msg="CreateContainer within sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\"" Apr 17 23:34:43.853856 containerd[2136]: time="2026-04-17T23:34:43.853777646Z" level=info msg="StartContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\"" Apr 17 23:34:43.961404 containerd[2136]: time="2026-04-17T23:34:43.961137555Z" level=info msg="StartContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" returns successfully" Apr 17 23:34:44.226102 kubelet[3597]: I0417 23:34:44.226005 3597 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:34:44.358863 kubelet[3597]: I0417 23:34:44.358373 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956d2\" (UniqueName: \"kubernetes.io/projected/1202caf1-d6c5-469f-923b-d7164f35a095-kube-api-access-956d2\") pod \"coredns-674b8bbfcf-gnx9h\" (UID: \"1202caf1-d6c5-469f-923b-d7164f35a095\") " pod="kube-system/coredns-674b8bbfcf-gnx9h" Apr 17 23:34:44.359705 kubelet[3597]: I0417 23:34:44.359638 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b935498-100d-4c23-9b35-990e795f2a44-config-volume\") pod \"coredns-674b8bbfcf-txdp9\" (UID: \"9b935498-100d-4c23-9b35-990e795f2a44\") " pod="kube-system/coredns-674b8bbfcf-txdp9" Apr 17 23:34:44.360240 kubelet[3597]: I0417 
23:34:44.360104 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1202caf1-d6c5-469f-923b-d7164f35a095-config-volume\") pod \"coredns-674b8bbfcf-gnx9h\" (UID: \"1202caf1-d6c5-469f-923b-d7164f35a095\") " pod="kube-system/coredns-674b8bbfcf-gnx9h" Apr 17 23:34:44.360763 kubelet[3597]: I0417 23:34:44.360643 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwwn\" (UniqueName: \"kubernetes.io/projected/9b935498-100d-4c23-9b35-990e795f2a44-kube-api-access-cpwwn\") pod \"coredns-674b8bbfcf-txdp9\" (UID: \"9b935498-100d-4c23-9b35-990e795f2a44\") " pod="kube-system/coredns-674b8bbfcf-txdp9" Apr 17 23:34:44.640585 containerd[2136]: time="2026-04-17T23:34:44.639558914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnx9h,Uid:1202caf1-d6c5-469f-923b-d7164f35a095,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:44.652365 containerd[2136]: time="2026-04-17T23:34:44.651998150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txdp9,Uid:9b935498-100d-4c23-9b35-990e795f2a44,Namespace:kube-system,Attempt:0,}" Apr 17 23:34:46.913979 (udev-worker)[4397]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:34:46.916407 systemd-networkd[1686]: cilium_host: Link UP Apr 17 23:34:46.916742 systemd-networkd[1686]: cilium_net: Link UP Apr 17 23:34:46.918499 systemd-networkd[1686]: cilium_net: Gained carrier Apr 17 23:34:46.919076 systemd-networkd[1686]: cilium_host: Gained carrier Apr 17 23:34:46.925955 (udev-worker)[4436]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:34:46.971641 systemd-networkd[1686]: cilium_net: Gained IPv6LL Apr 17 23:34:47.086431 (udev-worker)[4449]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:34:47.098590 systemd-networkd[1686]: cilium_vxlan: Link UP Apr 17 23:34:47.098605 systemd-networkd[1686]: cilium_vxlan: Gained carrier Apr 17 23:34:47.669979 kernel: NET: Registered PF_ALG protocol family Apr 17 23:34:47.803016 systemd-networkd[1686]: cilium_host: Gained IPv6LL Apr 17 23:34:48.443991 systemd-networkd[1686]: cilium_vxlan: Gained IPv6LL Apr 17 23:34:49.012306 systemd-networkd[1686]: lxc_health: Link UP Apr 17 23:34:49.015113 (udev-worker)[4399]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:34:49.022109 systemd-networkd[1686]: lxc_health: Gained carrier Apr 17 23:34:49.466563 systemd-networkd[1686]: lxc327f5aa609e7: Link UP Apr 17 23:34:49.476022 kernel: eth0: renamed from tmp12232 Apr 17 23:34:49.488685 systemd-networkd[1686]: lxc58ff9a2fa3cd: Link UP Apr 17 23:34:49.496342 systemd-networkd[1686]: lxc327f5aa609e7: Gained carrier Apr 17 23:34:49.509236 kernel: eth0: renamed from tmp9c27a Apr 17 23:34:49.525253 systemd-networkd[1686]: lxc58ff9a2fa3cd: Gained carrier Apr 17 23:34:50.402901 kubelet[3597]: I0417 23:34:50.400138 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7grst" podStartSLOduration=13.339148523 podStartE2EDuration="21.400117483s" podCreationTimestamp="2026-04-17 23:34:29 +0000 UTC" firstStartedPulling="2026-04-17 23:34:30.482107488 +0000 UTC m=+6.424240701" lastFinishedPulling="2026-04-17 23:34:38.543076436 +0000 UTC m=+14.485209661" observedRunningTime="2026-04-17 23:34:44.96760366 +0000 UTC m=+20.909736909" watchObservedRunningTime="2026-04-17 23:34:50.400117483 +0000 UTC m=+26.342250708" Apr 17 23:34:50.938173 systemd-networkd[1686]: lxc_health: Gained IPv6LL Apr 17 23:34:51.130150 systemd-networkd[1686]: lxc58ff9a2fa3cd: Gained IPv6LL Apr 17 23:34:51.323152 systemd-networkd[1686]: lxc327f5aa609e7: Gained IPv6LL Apr 17 23:34:54.250312 ntpd[2083]: Listen normally on 6 cilium_host 192.168.0.161:123
Apr 17 23:34:54.250447 ntpd[2083]: Listen normally on 7 cilium_net [fe80::e4cc:36ff:fefc:ee69%4]:123 Apr 17 23:34:54.250528 ntpd[2083]: Listen normally on 8 cilium_host [fe80::a4bd:9aff:fe7c:a5cf%5]:123 Apr 17 23:34:54.250596 ntpd[2083]: Listen normally on 9 cilium_vxlan [fe80::90e6:79ff:fe6b:a73e%6]:123 Apr 17 23:34:54.250663 ntpd[2083]: Listen normally on 10 lxc_health [fe80::6019:8cff:fea8:1a47%8]:123 Apr 17 23:34:54.250730 ntpd[2083]: Listen normally on 11 lxc327f5aa609e7 [fe80::68d5:7eff:fe84:e45f%10]:123 Apr 17 23:34:54.250799 ntpd[2083]: Listen normally on 12 lxc58ff9a2fa3cd [fe80::48fa:91ff:fe15:6fae%12]:123 Apr 17 23:34:55.568748 kubelet[3597]: I0417 23:34:55.567316 3597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:34:57.844834 containerd[2136]: time="2026-04-17T23:34:57.843481624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:57.844834 containerd[2136]: time="2026-04-17T23:34:57.843583204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:57.844834 containerd[2136]: time="2026-04-17T23:34:57.843619612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:57.844834 containerd[2136]: time="2026-04-17T23:34:57.843796708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:57.866802 containerd[2136]: time="2026-04-17T23:34:57.865496068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:57.866802 containerd[2136]: time="2026-04-17T23:34:57.865595572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:57.866802 containerd[2136]: time="2026-04-17T23:34:57.865624048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:57.866802 containerd[2136]: time="2026-04-17T23:34:57.865802608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:58.063461 containerd[2136]: time="2026-04-17T23:34:58.063299809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gnx9h,Uid:1202caf1-d6c5-469f-923b-d7164f35a095,Namespace:kube-system,Attempt:0,} returns sandbox id \"1223234199e9ecf2682ef3f9be59fd0d801d8e7a2e6590ef87b0888ac866aff5\"" Apr 17 23:34:58.074620 containerd[2136]: time="2026-04-17T23:34:58.073825585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txdp9,Uid:9b935498-100d-4c23-9b35-990e795f2a44,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c27ac535bb0c0457e032f9dbe2ad61bbedff8178147747adcadcd76872b478e\"" Apr 17 23:34:58.087525 containerd[2136]: time="2026-04-17T23:34:58.087459661Z" level=info msg="CreateContainer within sandbox \"1223234199e9ecf2682ef3f9be59fd0d801d8e7a2e6590ef87b0888ac866aff5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:34:58.095848 containerd[2136]: time="2026-04-17T23:34:58.095520277Z" level=info msg="CreateContainer within sandbox \"9c27ac535bb0c0457e032f9dbe2ad61bbedff8178147747adcadcd76872b478e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:34:58.144565 containerd[2136]: time="2026-04-17T23:34:58.144302353Z" level=info msg="CreateContainer within sandbox \"1223234199e9ecf2682ef3f9be59fd0d801d8e7a2e6590ef87b0888ac866aff5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bde0f47e96bf5d8240d6f356649f65393428c88cf2fe852ddcbfd6ba20a1c58c\"" Apr 17 23:34:58.147448 containerd[2136]: time="2026-04-17T23:34:58.146619253Z" level=info msg="StartContainer for \"bde0f47e96bf5d8240d6f356649f65393428c88cf2fe852ddcbfd6ba20a1c58c\"" Apr 17 23:34:58.150078 containerd[2136]: time="2026-04-17T23:34:58.147950833Z" level=info msg="CreateContainer within sandbox \"9c27ac535bb0c0457e032f9dbe2ad61bbedff8178147747adcadcd76872b478e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"84169a6fca5667d5937cc302ed48d5c3c1bdb1c2bbe37cbf899cf269aaecb21c\"" Apr 17 23:34:58.151111 containerd[2136]: time="2026-04-17T23:34:58.151039969Z" level=info msg="StartContainer for \"84169a6fca5667d5937cc302ed48d5c3c1bdb1c2bbe37cbf899cf269aaecb21c\"" Apr 17 23:34:58.331437 containerd[2136]: time="2026-04-17T23:34:58.327653126Z" level=info msg="StartContainer for \"bde0f47e96bf5d8240d6f356649f65393428c88cf2fe852ddcbfd6ba20a1c58c\" returns successfully" Apr 17 23:34:58.351093 containerd[2136]: time="2026-04-17T23:34:58.350536754Z" level=info msg="StartContainer for \"84169a6fca5667d5937cc302ed48d5c3c1bdb1c2bbe37cbf899cf269aaecb21c\" returns successfully" Apr 17 23:34:58.972567 kubelet[3597]: I0417 23:34:58.970566 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-txdp9" podStartSLOduration=28.97054209 podStartE2EDuration="28.97054209s" podCreationTimestamp="2026-04-17 23:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:58.967826526 +0000 UTC m=+34.909959787" watchObservedRunningTime="2026-04-17 23:34:58.97054209 +0000 UTC m=+34.912675315" Apr 17 23:35:13.605407 systemd[1]: Started sshd@7-172.31.22.159:22-4.175.71.9:48788.service - OpenSSH per-connection server daemon (4.175.71.9:48788). Apr 17 23:35:14.649955 sshd[4969]: Accepted publickey for core from 4.175.71.9 port 48788 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:14.653415 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:14.662041 systemd-logind[2097]: New session 8 of user core. Apr 17 23:35:14.677473 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:35:15.500783 sshd[4969]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:15.509003 systemd-logind[2097]: Session 8 logged out. Waiting for processes to exit. 
Apr 17 23:35:15.510034 systemd[1]: sshd@7-172.31.22.159:22-4.175.71.9:48788.service: Deactivated successfully. Apr 17 23:35:15.516095 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:35:15.518788 systemd-logind[2097]: Removed session 8. Apr 17 23:35:20.683339 systemd[1]: Started sshd@8-172.31.22.159:22-4.175.71.9:45346.service - OpenSSH per-connection server daemon (4.175.71.9:45346). Apr 17 23:35:21.718801 sshd[4985]: Accepted publickey for core from 4.175.71.9 port 45346 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:21.721674 sshd[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:21.729478 systemd-logind[2097]: New session 9 of user core. Apr 17 23:35:21.743423 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:35:22.561450 sshd[4985]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:22.569812 systemd[1]: sshd@8-172.31.22.159:22-4.175.71.9:45346.service: Deactivated successfully. Apr 17 23:35:22.571061 systemd-logind[2097]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:35:22.577204 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:35:22.580825 systemd-logind[2097]: Removed session 9. Apr 17 23:35:27.747935 systemd[1]: Started sshd@9-172.31.22.159:22-4.175.71.9:37752.service - OpenSSH per-connection server daemon (4.175.71.9:37752). Apr 17 23:35:28.793657 sshd[5002]: Accepted publickey for core from 4.175.71.9 port 37752 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:28.795522 sshd[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:28.805001 systemd-logind[2097]: New session 10 of user core. Apr 17 23:35:28.812165 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 17 23:35:29.619210 sshd[5002]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:29.629110 systemd[1]: sshd@9-172.31.22.159:22-4.175.71.9:37752.service: Deactivated successfully. Apr 17 23:35:29.629675 systemd-logind[2097]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:35:29.635861 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:35:29.641540 systemd-logind[2097]: Removed session 10. Apr 17 23:35:34.794415 systemd[1]: Started sshd@10-172.31.22.159:22-4.175.71.9:37760.service - OpenSSH per-connection server daemon (4.175.71.9:37760). Apr 17 23:35:35.845516 sshd[5018]: Accepted publickey for core from 4.175.71.9 port 37760 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:35.848376 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:35.857157 systemd-logind[2097]: New session 11 of user core. Apr 17 23:35:35.865384 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:35:36.676243 sshd[5018]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:36.682514 systemd[1]: sshd@10-172.31.22.159:22-4.175.71.9:37760.service: Deactivated successfully. Apr 17 23:35:36.690968 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:35:36.693540 systemd-logind[2097]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:35:36.695664 systemd-logind[2097]: Removed session 11. Apr 17 23:35:36.851411 systemd[1]: Started sshd@11-172.31.22.159:22-4.175.71.9:34852.service - OpenSSH per-connection server daemon (4.175.71.9:34852). Apr 17 23:35:37.895704 sshd[5033]: Accepted publickey for core from 4.175.71.9 port 34852 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:37.899077 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:37.910941 systemd-logind[2097]: New session 12 of user core. 
Apr 17 23:35:37.915512 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:35:38.818120 sshd[5033]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:38.828765 systemd[1]: sshd@11-172.31.22.159:22-4.175.71.9:34852.service: Deactivated successfully. Apr 17 23:35:38.838513 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:35:38.840435 systemd-logind[2097]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:35:38.843314 systemd-logind[2097]: Removed session 12. Apr 17 23:35:38.991735 systemd[1]: Started sshd@12-172.31.22.159:22-4.175.71.9:34868.service - OpenSSH per-connection server daemon (4.175.71.9:34868). Apr 17 23:35:40.038768 sshd[5045]: Accepted publickey for core from 4.175.71.9 port 34868 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:40.040597 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:40.051221 systemd-logind[2097]: New session 13 of user core. Apr 17 23:35:40.058993 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:35:40.858562 sshd[5045]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:40.867039 systemd-logind[2097]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:35:40.868520 systemd[1]: sshd@12-172.31.22.159:22-4.175.71.9:34868.service: Deactivated successfully. Apr 17 23:35:40.877415 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:35:40.879843 systemd-logind[2097]: Removed session 13. Apr 17 23:35:46.033368 systemd[1]: Started sshd@13-172.31.22.159:22-4.175.71.9:49296.service - OpenSSH per-connection server daemon (4.175.71.9:49296). 
Apr 17 23:35:47.077842 sshd[5062]: Accepted publickey for core from 4.175.71.9 port 49296 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:47.081476 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:47.090268 systemd-logind[2097]: New session 14 of user core. Apr 17 23:35:47.100439 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:35:47.902952 sshd[5062]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:47.909277 systemd[1]: sshd@13-172.31.22.159:22-4.175.71.9:49296.service: Deactivated successfully. Apr 17 23:35:47.917826 systemd-logind[2097]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:35:47.918242 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:35:47.922376 systemd-logind[2097]: Removed session 14. Apr 17 23:35:53.079377 systemd[1]: Started sshd@14-172.31.22.159:22-4.175.71.9:49304.service - OpenSSH per-connection server daemon (4.175.71.9:49304). Apr 17 23:35:54.120968 sshd[5076]: Accepted publickey for core from 4.175.71.9 port 49304 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:54.124346 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:54.137182 systemd-logind[2097]: New session 15 of user core. Apr 17 23:35:54.145590 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:35:54.947212 sshd[5076]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:54.955459 systemd[1]: sshd@14-172.31.22.159:22-4.175.71.9:49304.service: Deactivated successfully. Apr 17 23:35:54.961464 systemd-logind[2097]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:35:54.962502 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:35:54.966019 systemd-logind[2097]: Removed session 15. 
Apr 17 23:35:55.122407 systemd[1]: Started sshd@15-172.31.22.159:22-4.175.71.9:49308.service - OpenSSH per-connection server daemon (4.175.71.9:49308). Apr 17 23:35:56.166778 sshd[5090]: Accepted publickey for core from 4.175.71.9 port 49308 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:56.168842 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:56.177982 systemd-logind[2097]: New session 16 of user core. Apr 17 23:35:56.185707 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:35:57.073544 sshd[5090]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:57.080191 systemd[1]: sshd@15-172.31.22.159:22-4.175.71.9:49308.service: Deactivated successfully. Apr 17 23:35:57.080740 systemd-logind[2097]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:35:57.088004 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:35:57.093110 systemd-logind[2097]: Removed session 16. Apr 17 23:35:57.248423 systemd[1]: Started sshd@16-172.31.22.159:22-4.175.71.9:46110.service - OpenSSH per-connection server daemon (4.175.71.9:46110). Apr 17 23:35:58.284682 sshd[5102]: Accepted publickey for core from 4.175.71.9 port 46110 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:35:58.287920 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:35:58.296657 systemd-logind[2097]: New session 17 of user core. Apr 17 23:35:58.304576 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:35:59.808828 sshd[5102]: pam_unix(sshd:session): session closed for user core Apr 17 23:35:59.815316 systemd[1]: sshd@16-172.31.22.159:22-4.175.71.9:46110.service: Deactivated successfully. Apr 17 23:35:59.823987 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:35:59.826462 systemd-logind[2097]: Session 17 logged out. Waiting for processes to exit. 
Apr 17 23:35:59.829958 systemd-logind[2097]: Removed session 17. Apr 17 23:35:59.982604 systemd[1]: Started sshd@17-172.31.22.159:22-4.175.71.9:46120.service - OpenSSH per-connection server daemon (4.175.71.9:46120). Apr 17 23:36:01.031921 sshd[5122]: Accepted publickey for core from 4.175.71.9 port 46120 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:01.033816 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:01.042580 systemd-logind[2097]: New session 18 of user core. Apr 17 23:36:01.047402 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:36:02.090542 sshd[5122]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:02.099984 systemd-logind[2097]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:36:02.100116 systemd[1]: sshd@17-172.31.22.159:22-4.175.71.9:46120.service: Deactivated successfully. Apr 17 23:36:02.105639 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:36:02.108446 systemd-logind[2097]: Removed session 18. Apr 17 23:36:02.268432 systemd[1]: Started sshd@18-172.31.22.159:22-4.175.71.9:46124.service - OpenSSH per-connection server daemon (4.175.71.9:46124). Apr 17 23:36:03.316420 sshd[5136]: Accepted publickey for core from 4.175.71.9 port 46124 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:03.318697 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:03.326949 systemd-logind[2097]: New session 19 of user core. Apr 17 23:36:03.331746 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:36:04.143221 sshd[5136]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:04.150737 systemd[1]: sshd@18-172.31.22.159:22-4.175.71.9:46124.service: Deactivated successfully. Apr 17 23:36:04.158405 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 17 23:36:04.160293 systemd-logind[2097]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:36:04.162385 systemd-logind[2097]: Removed session 19. Apr 17 23:36:09.318378 systemd[1]: Started sshd@19-172.31.22.159:22-4.175.71.9:48886.service - OpenSSH per-connection server daemon (4.175.71.9:48886). Apr 17 23:36:10.362765 sshd[5151]: Accepted publickey for core from 4.175.71.9 port 48886 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:10.365546 sshd[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:10.372830 systemd-logind[2097]: New session 20 of user core. Apr 17 23:36:10.380380 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:36:11.187174 sshd[5151]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:11.195083 systemd[1]: sshd@19-172.31.22.159:22-4.175.71.9:48886.service: Deactivated successfully. Apr 17 23:36:11.202091 systemd-logind[2097]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:36:11.202926 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:36:11.206224 systemd-logind[2097]: Removed session 20. Apr 17 23:36:16.362840 systemd[1]: Started sshd@20-172.31.22.159:22-4.175.71.9:37102.service - OpenSSH per-connection server daemon (4.175.71.9:37102). Apr 17 23:36:17.404459 sshd[5165]: Accepted publickey for core from 4.175.71.9 port 37102 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:17.407143 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:17.414844 systemd-logind[2097]: New session 21 of user core. Apr 17 23:36:17.428530 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:36:18.224814 sshd[5165]: pam_unix(sshd:session): session closed for user core Apr 17 23:36:18.234288 systemd[1]: sshd@20-172.31.22.159:22-4.175.71.9:37102.service: Deactivated successfully. 
Apr 17 23:36:18.239658 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:36:18.241952 systemd-logind[2097]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:36:18.244078 systemd-logind[2097]: Removed session 21. Apr 17 23:36:18.399390 systemd[1]: Started sshd@21-172.31.22.159:22-4.175.71.9:37114.service - OpenSSH per-connection server daemon (4.175.71.9:37114). Apr 17 23:36:19.448141 sshd[5179]: Accepted publickey for core from 4.175.71.9 port 37114 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:19.450774 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:19.460460 systemd-logind[2097]: New session 22 of user core. Apr 17 23:36:19.467413 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 17 23:36:22.108487 kubelet[3597]: I0417 23:36:22.103658 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gnx9h" podStartSLOduration=112.10363801 podStartE2EDuration="1m52.10363801s" podCreationTimestamp="2026-04-17 23:34:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:59.02929373 +0000 UTC m=+34.971426967" watchObservedRunningTime="2026-04-17 23:36:22.10363801 +0000 UTC m=+118.045771235" Apr 17 23:36:22.127749 containerd[2136]: time="2026-04-17T23:36:22.127379255Z" level=info msg="StopContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" with timeout 30 (s)" Apr 17 23:36:22.130727 containerd[2136]: time="2026-04-17T23:36:22.130112327Z" level=info msg="Stop container \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" with signal terminated" Apr 17 23:36:22.188941 containerd[2136]: time="2026-04-17T23:36:22.188336435Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:36:22.204351 containerd[2136]: time="2026-04-17T23:36:22.204125675Z" level=info msg="StopContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" with timeout 2 (s)" Apr 17 23:36:22.204972 containerd[2136]: time="2026-04-17T23:36:22.204803663Z" level=info msg="Stop container \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" with signal terminated" Apr 17 23:36:22.223298 systemd-networkd[1686]: lxc_health: Link DOWN Apr 17 23:36:22.223318 systemd-networkd[1686]: lxc_health: Lost carrier Apr 17 23:36:22.265654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e-rootfs.mount: Deactivated successfully. Apr 17 23:36:22.278806 containerd[2136]: time="2026-04-17T23:36:22.278213543Z" level=info msg="shim disconnected" id=98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e namespace=k8s.io Apr 17 23:36:22.278806 containerd[2136]: time="2026-04-17T23:36:22.278490179Z" level=warning msg="cleaning up after shim disconnected" id=98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e namespace=k8s.io Apr 17 23:36:22.278806 containerd[2136]: time="2026-04-17T23:36:22.278519639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:22.316482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b-rootfs.mount: Deactivated successfully. 
Apr 17 23:36:22.321214 containerd[2136]: time="2026-04-17T23:36:22.321003048Z" level=info msg="shim disconnected" id=75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b namespace=k8s.io Apr 17 23:36:22.321399 containerd[2136]: time="2026-04-17T23:36:22.321212244Z" level=warning msg="cleaning up after shim disconnected" id=75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b namespace=k8s.io Apr 17 23:36:22.321399 containerd[2136]: time="2026-04-17T23:36:22.321234744Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:22.327209 containerd[2136]: time="2026-04-17T23:36:22.327026652Z" level=info msg="StopContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" returns successfully" Apr 17 23:36:22.330900 containerd[2136]: time="2026-04-17T23:36:22.328430736Z" level=info msg="StopPodSandbox for \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\"" Apr 17 23:36:22.330900 containerd[2136]: time="2026-04-17T23:36:22.328511760Z" level=info msg="Container to stop \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.333443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37-shm.mount: Deactivated successfully. 
Apr 17 23:36:22.379765 containerd[2136]: time="2026-04-17T23:36:22.379600812Z" level=info msg="StopContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" returns successfully" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381461580Z" level=info msg="StopPodSandbox for \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\"" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381526068Z" level=info msg="Container to stop \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381551352Z" level=info msg="Container to stop \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381573972Z" level=info msg="Container to stop \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381597480Z" level=info msg="Container to stop \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.382004 containerd[2136]: time="2026-04-17T23:36:22.381618816Z" level=info msg="Container to stop \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:36:22.389471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182-shm.mount: Deactivated successfully. 
Apr 17 23:36:22.444761 containerd[2136]: time="2026-04-17T23:36:22.444663576Z" level=info msg="shim disconnected" id=fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37 namespace=k8s.io Apr 17 23:36:22.444761 containerd[2136]: time="2026-04-17T23:36:22.444754932Z" level=warning msg="cleaning up after shim disconnected" id=fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37 namespace=k8s.io Apr 17 23:36:22.444761 containerd[2136]: time="2026-04-17T23:36:22.444777000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:22.469029 containerd[2136]: time="2026-04-17T23:36:22.468954888Z" level=info msg="shim disconnected" id=8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182 namespace=k8s.io Apr 17 23:36:22.469901 containerd[2136]: time="2026-04-17T23:36:22.469727760Z" level=warning msg="cleaning up after shim disconnected" id=8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182 namespace=k8s.io Apr 17 23:36:22.470117 containerd[2136]: time="2026-04-17T23:36:22.469767348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:36:22.478656 containerd[2136]: time="2026-04-17T23:36:22.478590156Z" level=info msg="TearDown network for sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" successfully" Apr 17 23:36:22.478656 containerd[2136]: time="2026-04-17T23:36:22.478644768Z" level=info msg="StopPodSandbox for \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" returns successfully" Apr 17 23:36:22.503551 containerd[2136]: time="2026-04-17T23:36:22.503481060Z" level=info msg="TearDown network for sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" successfully" Apr 17 23:36:22.503551 containerd[2136]: time="2026-04-17T23:36:22.503537664Z" level=info msg="StopPodSandbox for \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" returns successfully" Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572216 3597 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-etc-cni-netd\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572280 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cni-path\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572358 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77421517-915e-4faa-98c4-1ef7a0fff6fb-cilium-config-path\") pod \"77421517-915e-4faa-98c4-1ef7a0fff6fb\" (UID: \"77421517-915e-4faa-98c4-1ef7a0fff6fb\") " Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572399 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-net\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572433 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-run\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.573963 kubelet[3597]: I0417 23:36:22.572468 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cff77200-3dee-474d-9e4f-bc525ef22bad-clustermesh-secrets\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: 
\"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572503 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-bpf-maps\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572539 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-hostproc\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572574 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jxjl\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-kube-api-access-5jxjl\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572605 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-lib-modules\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572636 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-kernel\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574468 kubelet[3597]: I0417 23:36:22.572677 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm6gx\" (UniqueName: 
\"kubernetes.io/projected/77421517-915e-4faa-98c4-1ef7a0fff6fb-kube-api-access-mm6gx\") pod \"77421517-915e-4faa-98c4-1ef7a0fff6fb\" (UID: \"77421517-915e-4faa-98c4-1ef7a0fff6fb\") " Apr 17 23:36:22.574799 kubelet[3597]: I0417 23:36:22.572718 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-config-path\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574799 kubelet[3597]: I0417 23:36:22.572754 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-hubble-tls\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574799 kubelet[3597]: I0417 23:36:22.572794 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-xtables-lock\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574799 kubelet[3597]: I0417 23:36:22.572827 3597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-cgroup\") pod \"cff77200-3dee-474d-9e4f-bc525ef22bad\" (UID: \"cff77200-3dee-474d-9e4f-bc525ef22bad\") " Apr 17 23:36:22.574799 kubelet[3597]: I0417 23:36:22.573007 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.575129 kubelet[3597]: I0417 23:36:22.573067 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.575129 kubelet[3597]: I0417 23:36:22.573106 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cni-path" (OuterVolumeSpecName: "cni-path") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.577145 kubelet[3597]: I0417 23:36:22.577076 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.577311 kubelet[3597]: I0417 23:36:22.577164 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.577755 kubelet[3597]: I0417 23:36:22.577704 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.578377 kubelet[3597]: I0417 23:36:22.577930 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.580084 kubelet[3597]: I0417 23:36:22.580011 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.580255 kubelet[3597]: I0417 23:36:22.580090 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-hostproc" (OuterVolumeSpecName: "hostproc") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.583826 kubelet[3597]: I0417 23:36:22.583751 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-kube-api-access-5jxjl" (OuterVolumeSpecName: "kube-api-access-5jxjl") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "kube-api-access-5jxjl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:36:22.584753 kubelet[3597]: I0417 23:36:22.584628 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 17 23:36:22.589069 kubelet[3597]: I0417 23:36:22.588466 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff77200-3dee-474d-9e4f-bc525ef22bad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:36:22.591706 kubelet[3597]: I0417 23:36:22.591625 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77421517-915e-4faa-98c4-1ef7a0fff6fb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77421517-915e-4faa-98c4-1ef7a0fff6fb" (UID: "77421517-915e-4faa-98c4-1ef7a0fff6fb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:22.592070 kubelet[3597]: I0417 23:36:22.591862 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77421517-915e-4faa-98c4-1ef7a0fff6fb-kube-api-access-mm6gx" (OuterVolumeSpecName: "kube-api-access-mm6gx") pod "77421517-915e-4faa-98c4-1ef7a0fff6fb" (UID: "77421517-915e-4faa-98c4-1ef7a0fff6fb"). InnerVolumeSpecName "kube-api-access-mm6gx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:36:22.593077 kubelet[3597]: I0417 23:36:22.593035 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:36:22.593732 kubelet[3597]: I0417 23:36:22.593681 3597 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cff77200-3dee-474d-9e4f-bc525ef22bad" (UID: "cff77200-3dee-474d-9e4f-bc525ef22bad"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:36:22.673647 kubelet[3597]: I0417 23:36:22.673494 3597 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-xtables-lock\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.673647 kubelet[3597]: I0417 23:36:22.673545 3597 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-cgroup\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.673647 kubelet[3597]: I0417 23:36:22.673573 3597 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-etc-cni-netd\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.673647 kubelet[3597]: I0417 23:36:22.673593 3597 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cni-path\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.673647 kubelet[3597]: I0417 23:36:22.673618 3597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77421517-915e-4faa-98c4-1ef7a0fff6fb-cilium-config-path\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673654 3597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-net\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673677 3597 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-run\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 
23:36:22.674036 kubelet[3597]: I0417 23:36:22.673702 3597 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cff77200-3dee-474d-9e4f-bc525ef22bad-clustermesh-secrets\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673724 3597 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-bpf-maps\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673744 3597 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-hostproc\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673769 3597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5jxjl\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-kube-api-access-5jxjl\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673790 3597 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-lib-modules\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674036 kubelet[3597]: I0417 23:36:22.673811 3597 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cff77200-3dee-474d-9e4f-bc525ef22bad-host-proc-sys-kernel\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674429 kubelet[3597]: I0417 23:36:22.673832 3597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mm6gx\" (UniqueName: \"kubernetes.io/projected/77421517-915e-4faa-98c4-1ef7a0fff6fb-kube-api-access-mm6gx\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674429 kubelet[3597]: I0417 
23:36:22.673853 3597 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cff77200-3dee-474d-9e4f-bc525ef22bad-cilium-config-path\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:22.674429 kubelet[3597]: I0417 23:36:22.673907 3597 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cff77200-3dee-474d-9e4f-bc525ef22bad-hubble-tls\") on node \"ip-172-31-22-159\" DevicePath \"\"" Apr 17 23:36:23.159195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37-rootfs.mount: Deactivated successfully. Apr 17 23:36:23.159490 systemd[1]: var-lib-kubelet-pods-77421517\x2d915e\x2d4faa\x2d98c4\x2d1ef7a0fff6fb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmm6gx.mount: Deactivated successfully. Apr 17 23:36:23.159743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182-rootfs.mount: Deactivated successfully. Apr 17 23:36:23.160005 systemd[1]: var-lib-kubelet-pods-cff77200\x2d3dee\x2d474d\x2d9e4f\x2dbc525ef22bad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jxjl.mount: Deactivated successfully. Apr 17 23:36:23.160271 systemd[1]: var-lib-kubelet-pods-cff77200\x2d3dee\x2d474d\x2d9e4f\x2dbc525ef22bad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 17 23:36:23.160507 systemd[1]: var-lib-kubelet-pods-cff77200\x2d3dee\x2d474d\x2d9e4f\x2dbc525ef22bad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 17 23:36:23.191481 kubelet[3597]: I0417 23:36:23.190401 3597 scope.go:117] "RemoveContainer" containerID="75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b" Apr 17 23:36:23.199994 containerd[2136]: time="2026-04-17T23:36:23.199538400Z" level=info msg="RemoveContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\"" Apr 17 23:36:23.215020 containerd[2136]: time="2026-04-17T23:36:23.214804572Z" level=info msg="RemoveContainer for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" returns successfully" Apr 17 23:36:23.218554 kubelet[3597]: I0417 23:36:23.218491 3597 scope.go:117] "RemoveContainer" containerID="9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720" Apr 17 23:36:23.255091 containerd[2136]: time="2026-04-17T23:36:23.254654244Z" level=info msg="RemoveContainer for \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\"" Apr 17 23:36:23.263694 containerd[2136]: time="2026-04-17T23:36:23.263455152Z" level=info msg="RemoveContainer for \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\" returns successfully" Apr 17 23:36:23.264059 kubelet[3597]: I0417 23:36:23.263952 3597 scope.go:117] "RemoveContainer" containerID="f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965" Apr 17 23:36:23.269410 containerd[2136]: time="2026-04-17T23:36:23.267252636Z" level=info msg="RemoveContainer for \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\"" Apr 17 23:36:23.278183 containerd[2136]: time="2026-04-17T23:36:23.278130912Z" level=info msg="RemoveContainer for \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\" returns successfully" Apr 17 23:36:23.278989 kubelet[3597]: I0417 23:36:23.278905 3597 scope.go:117] "RemoveContainer" containerID="c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a" Apr 17 23:36:23.281574 containerd[2136]: time="2026-04-17T23:36:23.281189712Z" level=info msg="RemoveContainer for 
\"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\"" Apr 17 23:36:23.290539 containerd[2136]: time="2026-04-17T23:36:23.290413272Z" level=info msg="RemoveContainer for \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\" returns successfully" Apr 17 23:36:23.291415 kubelet[3597]: I0417 23:36:23.291089 3597 scope.go:117] "RemoveContainer" containerID="2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b" Apr 17 23:36:23.293913 containerd[2136]: time="2026-04-17T23:36:23.293566020Z" level=info msg="RemoveContainer for \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\"" Apr 17 23:36:23.300057 containerd[2136]: time="2026-04-17T23:36:23.299987064Z" level=info msg="RemoveContainer for \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\" returns successfully" Apr 17 23:36:23.300683 kubelet[3597]: I0417 23:36:23.300536 3597 scope.go:117] "RemoveContainer" containerID="75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b" Apr 17 23:36:23.301209 containerd[2136]: time="2026-04-17T23:36:23.301079232Z" level=error msg="ContainerStatus for \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\": not found" Apr 17 23:36:23.301366 kubelet[3597]: E0417 23:36:23.301313 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\": not found" containerID="75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b" Apr 17 23:36:23.301536 kubelet[3597]: I0417 23:36:23.301384 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b"} err="failed to get 
container status \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"75f8f36ff3df943514c34a7e874115a68d96ecb76c9b793b47cf86f22a975f4b\": not found" Apr 17 23:36:23.301536 kubelet[3597]: I0417 23:36:23.301447 3597 scope.go:117] "RemoveContainer" containerID="9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720" Apr 17 23:36:23.302026 containerd[2136]: time="2026-04-17T23:36:23.301746120Z" level=error msg="ContainerStatus for \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\": not found" Apr 17 23:36:23.302137 kubelet[3597]: E0417 23:36:23.302096 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\": not found" containerID="9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720" Apr 17 23:36:23.302207 kubelet[3597]: I0417 23:36:23.302169 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720"} err="failed to get container status \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\": rpc error: code = NotFound desc = an error occurred when try to find container \"9161555ee1d680008c0681323c30fe820acf6024d250fe07321d665695254720\": not found" Apr 17 23:36:23.302283 kubelet[3597]: I0417 23:36:23.302210 3597 scope.go:117] "RemoveContainer" containerID="f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965" Apr 17 23:36:23.302647 containerd[2136]: time="2026-04-17T23:36:23.302565804Z" level=error msg="ContainerStatus for 
\"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\": not found" Apr 17 23:36:23.302829 kubelet[3597]: E0417 23:36:23.302789 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\": not found" containerID="f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965" Apr 17 23:36:23.302929 kubelet[3597]: I0417 23:36:23.302843 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965"} err="failed to get container status \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7f7acb17e8a6d3d41668e315a33c51777b2dec64dbfe8cff664d5ecde216965\": not found" Apr 17 23:36:23.303010 kubelet[3597]: I0417 23:36:23.302927 3597 scope.go:117] "RemoveContainer" containerID="c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a" Apr 17 23:36:23.303421 containerd[2136]: time="2026-04-17T23:36:23.303269040Z" level=error msg="ContainerStatus for \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\": not found" Apr 17 23:36:23.303506 kubelet[3597]: E0417 23:36:23.303454 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\": not found" 
containerID="c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a" Apr 17 23:36:23.303563 kubelet[3597]: I0417 23:36:23.303492 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a"} err="failed to get container status \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c849809fa3f07a32cf991e4f2c97f94d0fa5e1f234052f5f2405435e819e4a8a\": not found" Apr 17 23:36:23.303563 kubelet[3597]: I0417 23:36:23.303523 3597 scope.go:117] "RemoveContainer" containerID="2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b" Apr 17 23:36:23.304382 containerd[2136]: time="2026-04-17T23:36:23.303922872Z" level=error msg="ContainerStatus for \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\": not found" Apr 17 23:36:23.304511 kubelet[3597]: E0417 23:36:23.304173 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\": not found" containerID="2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b" Apr 17 23:36:23.304511 kubelet[3597]: I0417 23:36:23.304216 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b"} err="failed to get container status \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2adcf23ed8070c5466c3e559403851bf29b80148539ebe5b38cb72b138b8145b\": not found" Apr 17 
23:36:23.304511 kubelet[3597]: I0417 23:36:23.304245 3597 scope.go:117] "RemoveContainer" containerID="98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e" Apr 17 23:36:23.306162 containerd[2136]: time="2026-04-17T23:36:23.306120768Z" level=info msg="RemoveContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\"" Apr 17 23:36:23.312747 containerd[2136]: time="2026-04-17T23:36:23.312669552Z" level=info msg="RemoveContainer for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" returns successfully" Apr 17 23:36:23.313216 kubelet[3597]: I0417 23:36:23.313068 3597 scope.go:117] "RemoveContainer" containerID="98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e" Apr 17 23:36:23.313738 containerd[2136]: time="2026-04-17T23:36:23.313668396Z" level=error msg="ContainerStatus for \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\": not found" Apr 17 23:36:23.314019 kubelet[3597]: E0417 23:36:23.313934 3597 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\": not found" containerID="98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e" Apr 17 23:36:23.314019 kubelet[3597]: I0417 23:36:23.313982 3597 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e"} err="failed to get container status \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\": rpc error: code = NotFound desc = an error occurred when try to find container \"98d4d0e5ffa86a7ac4fc748ee5f39f976304f8b4bde279e602b68cb762de333e\": not found" Apr 17 23:36:24.201410 sshd[5179]: 
pam_unix(sshd:session): session closed for user core Apr 17 23:36:24.210974 systemd[1]: sshd@21-172.31.22.159:22-4.175.71.9:37114.service: Deactivated successfully. Apr 17 23:36:24.219685 systemd-logind[2097]: Session 22 logged out. Waiting for processes to exit. Apr 17 23:36:24.220580 systemd[1]: session-22.scope: Deactivated successfully. Apr 17 23:36:24.227184 systemd-logind[2097]: Removed session 22. Apr 17 23:36:24.250167 ntpd[2083]: Deleting interface #10 lxc_health, fe80::6019:8cff:fea8:1a47%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Apr 17 23:36:24.250747 ntpd[2083]: 17 Apr 23:36:24 ntpd[2083]: Deleting interface #10 lxc_health, fe80::6019:8cff:fea8:1a47%8#123, interface stats: received=0, sent=0, dropped=0, active_time=90 secs Apr 17 23:36:24.377442 systemd[1]: Started sshd@22-172.31.22.159:22-4.175.71.9:37122.service - OpenSSH per-connection server daemon (4.175.71.9:37122). Apr 17 23:36:24.381453 containerd[2136]: time="2026-04-17T23:36:24.380796650Z" level=info msg="StopPodSandbox for \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\"" Apr 17 23:36:24.381453 containerd[2136]: time="2026-04-17T23:36:24.380977154Z" level=info msg="TearDown network for sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" successfully" Apr 17 23:36:24.381453 containerd[2136]: time="2026-04-17T23:36:24.381003062Z" level=info msg="StopPodSandbox for \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" returns successfully" Apr 17 23:36:24.383494 containerd[2136]: time="2026-04-17T23:36:24.382629182Z" level=info msg="RemovePodSandbox for \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\"" Apr 17 23:36:24.383494 containerd[2136]: time="2026-04-17T23:36:24.382690286Z" level=info msg="Forcibly stopping sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\"" Apr 17 23:36:24.383494 containerd[2136]: time="2026-04-17T23:36:24.382791014Z" level=info 
msg="TearDown network for sandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" successfully" Apr 17 23:36:24.390932 containerd[2136]: time="2026-04-17T23:36:24.390178922Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:36:24.390932 containerd[2136]: time="2026-04-17T23:36:24.390259082Z" level=info msg="RemovePodSandbox \"8d102b4e5723e94ca952b0050354e209fc2211e26fbfbcccd8b21691635a4182\" returns successfully" Apr 17 23:36:24.391153 containerd[2136]: time="2026-04-17T23:36:24.391082558Z" level=info msg="StopPodSandbox for \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\"" Apr 17 23:36:24.391231 containerd[2136]: time="2026-04-17T23:36:24.391198670Z" level=info msg="TearDown network for sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" successfully" Apr 17 23:36:24.391231 containerd[2136]: time="2026-04-17T23:36:24.391221374Z" level=info msg="StopPodSandbox for \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" returns successfully" Apr 17 23:36:24.393911 containerd[2136]: time="2026-04-17T23:36:24.392058002Z" level=info msg="RemovePodSandbox for \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\"" Apr 17 23:36:24.393911 containerd[2136]: time="2026-04-17T23:36:24.392130086Z" level=info msg="Forcibly stopping sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\"" Apr 17 23:36:24.393911 containerd[2136]: time="2026-04-17T23:36:24.392274122Z" level=info msg="TearDown network for sandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" successfully" Apr 17 23:36:24.399346 containerd[2136]: time="2026-04-17T23:36:24.399273134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:36:24.399515 containerd[2136]: time="2026-04-17T23:36:24.399363038Z" level=info msg="RemovePodSandbox \"fdba2c5fd653ac3dc3985f7f961525321f7394cc63cc960380f847a72342de37\" returns successfully" Apr 17 23:36:24.516189 kubelet[3597]: I0417 23:36:24.516057 3597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77421517-915e-4faa-98c4-1ef7a0fff6fb" path="/var/lib/kubelet/pods/77421517-915e-4faa-98c4-1ef7a0fff6fb/volumes" Apr 17 23:36:24.518993 kubelet[3597]: I0417 23:36:24.518382 3597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff77200-3dee-474d-9e4f-bc525ef22bad" path="/var/lib/kubelet/pods/cff77200-3dee-474d-9e4f-bc525ef22bad/volumes" Apr 17 23:36:24.656318 kubelet[3597]: E0417 23:36:24.656243 3597 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 17 23:36:25.424967 sshd[5344]: Accepted publickey for core from 4.175.71.9 port 37122 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws Apr 17 23:36:25.427592 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:36:25.436998 systemd-logind[2097]: New session 23 of user core. Apr 17 23:36:25.445007 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 17 23:36:27.244826 kubelet[3597]: I0417 23:36:27.244757 3597 setters.go:618] "Node became not ready" node="ip-172-31-22-159" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:36:27Z","lastTransitionTime":"2026-04-17T23:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 17 23:36:27.907547 kubelet[3597]: I0417 23:36:27.907443 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-hostproc\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.907728 kubelet[3597]: I0417 23:36:27.907567 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-lib-modules\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.907728 kubelet[3597]: I0417 23:36:27.907653 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-xtables-lock\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.907831 kubelet[3597]: I0417 23:36:27.907723 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-clustermesh-secrets\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.907831 kubelet[3597]: I0417 23:36:27.907763 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-cilium-config-path\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.907831 kubelet[3597]: I0417 23:36:27.907801 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-host-proc-sys-net\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.907836 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-bpf-maps\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.907895 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-cni-path\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.907941 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-etc-cni-netd\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.907982 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-cilium-ipsec-secrets\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.908031 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27fm\" (UniqueName: \"kubernetes.io/projected/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-kube-api-access-f27fm\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908092 kubelet[3597]: I0417 23:36:27.908088 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-cilium-run\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908436 kubelet[3597]: I0417 23:36:27.908143 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-cilium-cgroup\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908436 kubelet[3597]: I0417 23:36:27.908183 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-host-proc-sys-kernel\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.908436 kubelet[3597]: I0417 23:36:27.908219 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7d7b8dd-b4bb-419e-9c47-11709ec31fd0-hubble-tls\") pod \"cilium-86w8t\" (UID: \"f7d7b8dd-b4bb-419e-9c47-11709ec31fd0\") " pod="kube-system/cilium-86w8t"
Apr 17 23:36:27.970175 sshd[5344]: pam_unix(sshd:session): session closed for user core
Apr 17 23:36:27.976052 systemd[1]: sshd@22-172.31.22.159:22-4.175.71.9:37122.service: Deactivated successfully.
Apr 17 23:36:27.977242 systemd-logind[2097]: Session 23 logged out. Waiting for processes to exit.
Apr 17 23:36:27.986126 systemd[1]: session-23.scope: Deactivated successfully.
Apr 17 23:36:27.990044 systemd-logind[2097]: Removed session 23.
Apr 17 23:36:28.118671 containerd[2136]: time="2026-04-17T23:36:28.118595932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86w8t,Uid:f7d7b8dd-b4bb-419e-9c47-11709ec31fd0,Namespace:kube-system,Attempt:0,}"
Apr 17 23:36:28.158416 systemd[1]: Started sshd@23-172.31.22.159:22-4.175.71.9:49772.service - OpenSSH per-connection server daemon (4.175.71.9:49772).
Apr 17 23:36:28.172282 containerd[2136]: time="2026-04-17T23:36:28.171318473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:36:28.172282 containerd[2136]: time="2026-04-17T23:36:28.171445601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:36:28.172282 containerd[2136]: time="2026-04-17T23:36:28.171477449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:28.172282 containerd[2136]: time="2026-04-17T23:36:28.171643277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:36:28.234032 containerd[2136]: time="2026-04-17T23:36:28.233956925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86w8t,Uid:f7d7b8dd-b4bb-419e-9c47-11709ec31fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\""
Apr 17 23:36:28.246434 containerd[2136]: time="2026-04-17T23:36:28.246052481Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:36:28.269121 containerd[2136]: time="2026-04-17T23:36:28.269065637Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abc04bc5b12dda0ed043a211b90f9ac7292961c79930dd167fa3780b8042a3e2\""
Apr 17 23:36:28.270365 containerd[2136]: time="2026-04-17T23:36:28.270267725Z" level=info msg="StartContainer for \"abc04bc5b12dda0ed043a211b90f9ac7292961c79930dd167fa3780b8042a3e2\""
Apr 17 23:36:28.370778 containerd[2136]: time="2026-04-17T23:36:28.370622526Z" level=info msg="StartContainer for \"abc04bc5b12dda0ed043a211b90f9ac7292961c79930dd167fa3780b8042a3e2\" returns successfully"
Apr 17 23:36:28.445418 containerd[2136]: time="2026-04-17T23:36:28.444056118Z" level=info msg="shim disconnected" id=abc04bc5b12dda0ed043a211b90f9ac7292961c79930dd167fa3780b8042a3e2 namespace=k8s.io
Apr 17 23:36:28.445418 containerd[2136]: time="2026-04-17T23:36:28.444152142Z" level=warning msg="cleaning up after shim disconnected" id=abc04bc5b12dda0ed043a211b90f9ac7292961c79930dd167fa3780b8042a3e2 namespace=k8s.io
Apr 17 23:36:28.445418 containerd[2136]: time="2026-04-17T23:36:28.444173982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:36:29.208427 sshd[5373]: Accepted publickey for core from 4.175.71.9 port 49772 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws
Apr 17 23:36:29.211083 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:36:29.221633 systemd-logind[2097]: New session 24 of user core.
Apr 17 23:36:29.227416 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 17 23:36:29.273068 containerd[2136]: time="2026-04-17T23:36:29.272425026Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:36:29.321931 containerd[2136]: time="2026-04-17T23:36:29.320276082Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b\""
Apr 17 23:36:29.343642 containerd[2136]: time="2026-04-17T23:36:29.343558278Z" level=info msg="StartContainer for \"a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b\""
Apr 17 23:36:29.526049 containerd[2136]: time="2026-04-17T23:36:29.525976051Z" level=info msg="StartContainer for \"a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b\" returns successfully"
Apr 17 23:36:29.580555 containerd[2136]: time="2026-04-17T23:36:29.580456688Z" level=info msg="shim disconnected" id=a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b namespace=k8s.io
Apr 17 23:36:29.580555 containerd[2136]: time="2026-04-17T23:36:29.580549148Z" level=warning msg="cleaning up after shim disconnected" id=a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b namespace=k8s.io
Apr 17 23:36:29.581177 containerd[2136]: time="2026-04-17T23:36:29.580572284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:36:29.658284 kubelet[3597]: E0417 23:36:29.658152 3597 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:36:29.915252 sshd[5373]: pam_unix(sshd:session): session closed for user core
Apr 17 23:36:29.923449 systemd[1]: sshd@23-172.31.22.159:22-4.175.71.9:49772.service: Deactivated successfully.
Apr 17 23:36:29.929468 systemd[1]: session-24.scope: Deactivated successfully.
Apr 17 23:36:29.930583 systemd-logind[2097]: Session 24 logged out. Waiting for processes to exit.
Apr 17 23:36:29.935150 systemd-logind[2097]: Removed session 24.
Apr 17 23:36:30.021598 systemd[1]: run-containerd-runc-k8s.io-a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b-runc.WzusTg.mount: Deactivated successfully.
Apr 17 23:36:30.021904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a82004ff32eab4623ab5923a225f4224a855242fd27da92b4bb1a5ac43a9d17b-rootfs.mount: Deactivated successfully.
Apr 17 23:36:30.089860 systemd[1]: Started sshd@24-172.31.22.159:22-4.175.71.9:49782.service - OpenSSH per-connection server daemon (4.175.71.9:49782).
Apr 17 23:36:30.285675 containerd[2136]: time="2026-04-17T23:36:30.285583543Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:36:30.326420 containerd[2136]: time="2026-04-17T23:36:30.325563619Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24\""
Apr 17 23:36:30.329625 containerd[2136]: time="2026-04-17T23:36:30.329555131Z" level=info msg="StartContainer for \"63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24\""
Apr 17 23:36:30.441492 containerd[2136]: time="2026-04-17T23:36:30.441420932Z" level=info msg="StartContainer for \"63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24\" returns successfully"
Apr 17 23:36:30.499963 containerd[2136]: time="2026-04-17T23:36:30.499810904Z" level=info msg="shim disconnected" id=63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24 namespace=k8s.io
Apr 17 23:36:30.499963 containerd[2136]: time="2026-04-17T23:36:30.499959596Z" level=warning msg="cleaning up after shim disconnected" id=63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24 namespace=k8s.io
Apr 17 23:36:30.500406 containerd[2136]: time="2026-04-17T23:36:30.499984748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:36:31.028199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d54264fca5ab1e477d6184fc479846cc7cfc3cf116a1aabf4705142adb9d24-rootfs.mount: Deactivated successfully.
Apr 17 23:36:31.137310 sshd[5542]: Accepted publickey for core from 4.175.71.9 port 49782 ssh2: RSA SHA256:Y4BPHWm1n8mK0R4k3Nc8+65YIxJqSgtKkzRPVXbpsws
Apr 17 23:36:31.145020 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:36:31.158335 systemd-logind[2097]: New session 25 of user core.
Apr 17 23:36:31.164565 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 17 23:36:31.299445 containerd[2136]: time="2026-04-17T23:36:31.299259224Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:36:31.337411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51994914.mount: Deactivated successfully.
Apr 17 23:36:31.350768 containerd[2136]: time="2026-04-17T23:36:31.350699012Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b\""
Apr 17 23:36:31.357333 containerd[2136]: time="2026-04-17T23:36:31.357257420Z" level=info msg="StartContainer for \"56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b\""
Apr 17 23:36:31.463464 containerd[2136]: time="2026-04-17T23:36:31.463297689Z" level=info msg="StartContainer for \"56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b\" returns successfully"
Apr 17 23:36:31.507443 containerd[2136]: time="2026-04-17T23:36:31.507134073Z" level=info msg="shim disconnected" id=56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b namespace=k8s.io
Apr 17 23:36:31.507443 containerd[2136]: time="2026-04-17T23:36:31.507227649Z" level=warning msg="cleaning up after shim disconnected" id=56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b namespace=k8s.io
Apr 17 23:36:31.507443 containerd[2136]: time="2026-04-17T23:36:31.507273561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:36:31.529780 containerd[2136]: time="2026-04-17T23:36:31.528360033Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:36:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:36:32.022036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e5a30fd0630a7423eb5e149fc8e7222e18423fbd407b21d944d41f97a87f3b-rootfs.mount: Deactivated successfully.
Apr 17 23:36:32.305978 containerd[2136]: time="2026-04-17T23:36:32.305142933Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:36:32.341716 containerd[2136]: time="2026-04-17T23:36:32.341637429Z" level=info msg="CreateContainer within sandbox \"d7c9963959c9cb0c3428a9c8c6b751059e78e04db76481a8da9e8af991fc3ff7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc4bb8cb65d5a1dff8574ae2b8fc1d5b7b2c9d4cf6439606d6de653f018e6423\""
Apr 17 23:36:32.343211 containerd[2136]: time="2026-04-17T23:36:32.343143681Z" level=info msg="StartContainer for \"fc4bb8cb65d5a1dff8574ae2b8fc1d5b7b2c9d4cf6439606d6de653f018e6423\""
Apr 17 23:36:32.472431 containerd[2136]: time="2026-04-17T23:36:32.472331890Z" level=info msg="StartContainer for \"fc4bb8cb65d5a1dff8574ae2b8fc1d5b7b2c9d4cf6439606d6de653f018e6423\" returns successfully"
Apr 17 23:36:33.022661 systemd[1]: run-containerd-runc-k8s.io-fc4bb8cb65d5a1dff8574ae2b8fc1d5b7b2c9d4cf6439606d6de653f018e6423-runc.oVH90V.mount: Deactivated successfully.
Apr 17 23:36:33.245077 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 17 23:36:33.351692 kubelet[3597]: I0417 23:36:33.351510 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-86w8t" podStartSLOduration=6.35148883 podStartE2EDuration="6.35148883s" podCreationTimestamp="2026-04-17 23:36:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:36:33.346783726 +0000 UTC m=+129.288916975" watchObservedRunningTime="2026-04-17 23:36:33.35148883 +0000 UTC m=+129.293622055"
Apr 17 23:36:37.502552 systemd-networkd[1686]: lxc_health: Link UP
Apr 17 23:36:37.509097 systemd-networkd[1686]: lxc_health: Gained carrier
Apr 17 23:36:37.526035 (udev-worker)[6215]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:36:38.593075 systemd[1]: run-containerd-runc-k8s.io-fc4bb8cb65d5a1dff8574ae2b8fc1d5b7b2c9d4cf6439606d6de653f018e6423-runc.T6JfDv.mount: Deactivated successfully.
Apr 17 23:36:39.226195 systemd-networkd[1686]: lxc_health: Gained IPv6LL
Apr 17 23:36:41.250326 ntpd[2083]: Listen normally on 13 lxc_health [fe80::f426:2aff:feac:3b0d%14]:123
Apr 17 23:36:41.251038 ntpd[2083]: 17 Apr 23:36:41 ntpd[2083]: Listen normally on 13 lxc_health [fe80::f426:2aff:feac:3b0d%14]:123
Apr 17 23:36:43.400140 kubelet[3597]: E0417 23:36:43.398966 3597 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34502->127.0.0.1:46301: write tcp 127.0.0.1:34502->127.0.0.1:46301: write: broken pipe
Apr 17 23:36:43.593651 sshd[5542]: pam_unix(sshd:session): session closed for user core
Apr 17 23:36:43.605625 systemd[1]: sshd@24-172.31.22.159:22-4.175.71.9:49782.service: Deactivated successfully.
Apr 17 23:36:43.617338 systemd[1]: session-25.scope: Deactivated successfully.
Apr 17 23:36:43.619844 systemd-logind[2097]: Session 25 logged out. Waiting for processes to exit.
Apr 17 23:36:43.623450 systemd-logind[2097]: Removed session 25.
Apr 17 23:37:14.739746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417-rootfs.mount: Deactivated successfully.
Apr 17 23:37:14.746234 containerd[2136]: time="2026-04-17T23:37:14.746135968Z" level=info msg="shim disconnected" id=581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417 namespace=k8s.io
Apr 17 23:37:14.746234 containerd[2136]: time="2026-04-17T23:37:14.746218084Z" level=warning msg="cleaning up after shim disconnected" id=581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417 namespace=k8s.io
Apr 17 23:37:14.746234 containerd[2136]: time="2026-04-17T23:37:14.746240968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:37:15.439422 kubelet[3597]: I0417 23:37:15.438837 3597 scope.go:117] "RemoveContainer" containerID="581a184ea8cdbb6aacb806c3c700c09a4bbac9d75f19e226b3348430f4616417"
Apr 17 23:37:15.444740 containerd[2136]: time="2026-04-17T23:37:15.444423843Z" level=info msg="CreateContainer within sandbox \"7b15d6307159a8ccb8dbd96f23f68683b41178256ccd42c75014ba499e1be14f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:37:15.469815 containerd[2136]: time="2026-04-17T23:37:15.469736776Z" level=info msg="CreateContainer within sandbox \"7b15d6307159a8ccb8dbd96f23f68683b41178256ccd42c75014ba499e1be14f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8c2e118634083a32d22565499e8ae901d9872a00c0565ba48fa6d520d48f491d\""
Apr 17 23:37:15.472726 containerd[2136]: time="2026-04-17T23:37:15.470927956Z" level=info msg="StartContainer for \"8c2e118634083a32d22565499e8ae901d9872a00c0565ba48fa6d520d48f491d\""
Apr 17 23:37:15.605379 containerd[2136]: time="2026-04-17T23:37:15.605301112Z" level=info msg="StartContainer for \"8c2e118634083a32d22565499e8ae901d9872a00c0565ba48fa6d520d48f491d\" returns successfully"
Apr 17 23:37:17.658617 kubelet[3597]: E0417 23:37:17.658081 3597 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.159:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-159?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 17 23:37:19.717905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96-rootfs.mount: Deactivated successfully.
Apr 17 23:37:19.723442 containerd[2136]: time="2026-04-17T23:37:19.723330429Z" level=info msg="shim disconnected" id=6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96 namespace=k8s.io
Apr 17 23:37:19.723442 containerd[2136]: time="2026-04-17T23:37:19.723400845Z" level=warning msg="cleaning up after shim disconnected" id=6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96 namespace=k8s.io
Apr 17 23:37:19.723442 containerd[2136]: time="2026-04-17T23:37:19.723420873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:37:20.461995 kubelet[3597]: I0417 23:37:20.461535 3597 scope.go:117] "RemoveContainer" containerID="6eed672032fadd544756de16aa6d90e7657ebb6b596e2768f3331ac3d18e3e96"
Apr 17 23:37:20.465524 containerd[2136]: time="2026-04-17T23:37:20.465147476Z" level=info msg="CreateContainer within sandbox \"ddf7f49b2e948a8818774a9611066ce29effdb63538cd6927b2164d1cd3cb03e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:37:20.491776 containerd[2136]: time="2026-04-17T23:37:20.491639936Z" level=info msg="CreateContainer within sandbox \"ddf7f49b2e948a8818774a9611066ce29effdb63538cd6927b2164d1cd3cb03e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0b90007b715d048277fb54db171f11e0873c749e45832286e3bb1726ce8cf55d\""
Apr 17 23:37:20.492812 containerd[2136]: time="2026-04-17T23:37:20.492772436Z" level=info msg="StartContainer for \"0b90007b715d048277fb54db171f11e0873c749e45832286e3bb1726ce8cf55d\""
Apr 17 23:37:20.621613 containerd[2136]: time="2026-04-17T23:37:20.621539829Z" level=info msg="StartContainer for \"0b90007b715d048277fb54db171f11e0873c749e45832286e3bb1726ce8cf55d\" returns successfully"