Mar 7 00:53:58.234717 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 7 00:53:58.234767 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 00:53:58.234796 kernel: KASLR disabled due to lack of seed
Mar 7 00:53:58.234814 kernel: efi: EFI v2.7 by EDK II
Mar 7 00:53:58.234831 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 7 00:53:58.234848 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:53:58.234867 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 7 00:53:58.234883 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 00:53:58.234901 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 00:53:58.234917 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 00:53:58.234940 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 00:53:58.234957 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 7 00:53:58.234974 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 7 00:53:58.234991 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 7 00:53:58.235011 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 00:53:58.235032 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 7 00:53:58.235051 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 7 00:53:58.235069 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 7 00:53:58.235126 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 7 00:53:58.235150 kernel: printk: bootconsole [uart0] enabled
Mar 7 00:53:58.235168 kernel: NUMA: Failed to initialise from firmware
Mar 7 00:53:58.235187 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:53:58.235205 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 7 00:53:58.235223 kernel: Zone ranges:
Mar 7 00:53:58.235240 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 7 00:53:58.235258 kernel: DMA32 empty
Mar 7 00:53:58.235282 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 7 00:53:58.235300 kernel: Movable zone start for each node
Mar 7 00:53:58.235317 kernel: Early memory node ranges
Mar 7 00:53:58.235335 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 7 00:53:58.235352 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 7 00:53:58.235370 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 7 00:53:58.235388 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 7 00:53:58.235405 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 7 00:53:58.235422 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 7 00:53:58.235440 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 7 00:53:58.235457 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 7 00:53:58.235475 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:53:58.235497 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 7 00:53:58.235515 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:53:58.235541 kernel: psci: PSCIv1.0 detected in firmware.
Mar 7 00:53:58.235559 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:53:58.235578 kernel: psci: Trusted OS migration not required
Mar 7 00:53:58.235601 kernel: psci: SMC Calling Convention v1.1
Mar 7 00:53:58.235620 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 7 00:53:58.235640 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 00:53:58.235658 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 00:53:58.235677 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:53:58.235696 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:53:58.235714 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:53:58.235732 kernel: CPU features: detected: Spectre-v2
Mar 7 00:53:58.235751 kernel: CPU features: detected: Spectre-v3a
Mar 7 00:53:58.235769 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:53:58.235788 kernel: CPU features: detected: ARM erratum 1742098
Mar 7 00:53:58.235810 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 7 00:53:58.235829 kernel: alternatives: applying boot alternatives
Mar 7 00:53:58.235850 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:53:58.235869 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:53:58.235887 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:53:58.235906 kernel: Fallback order for Node 0: 0
Mar 7 00:53:58.235924 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 7 00:53:58.235943 kernel: Policy zone: Normal
Mar 7 00:53:58.235961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:53:58.235980 kernel: software IO TLB: area num 2.
Mar 7 00:53:58.235998 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 7 00:53:58.236023 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 7 00:53:58.236042 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:53:58.236061 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:53:58.236083 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:53:58.238152 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:53:58.238172 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:53:58.238191 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:53:58.238210 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:53:58.238229 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:53:58.238248 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:53:58.238266 kernel: GICv3: 96 SPIs implemented
Mar 7 00:53:58.238292 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:53:58.238311 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:53:58.238329 kernel: GICv3: GICv3 features: 16 PPIs
Mar 7 00:53:58.238347 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 7 00:53:58.238366 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 7 00:53:58.238384 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 7 00:53:58.238403 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 7 00:53:58.238422 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 7 00:53:58.238440 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 7 00:53:58.238459 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 7 00:53:58.238477 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:53:58.238495 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 7 00:53:58.238519 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 7 00:53:58.238538 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 7 00:53:58.238556 kernel: Console: colour dummy device 80x25
Mar 7 00:53:58.238575 kernel: printk: console [tty1] enabled
Mar 7 00:53:58.238594 kernel: ACPI: Core revision 20230628
Mar 7 00:53:58.238613 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 7 00:53:58.238632 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:53:58.238651 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 00:53:58.238670 kernel: landlock: Up and running.
Mar 7 00:53:58.238692 kernel: SELinux: Initializing.
Mar 7 00:53:58.238711 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:53:58.238730 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:53:58.238749 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:53:58.238768 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:53:58.238787 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:53:58.238806 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:53:58.238825 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 7 00:53:58.238843 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 7 00:53:58.238866 kernel: Remapping and enabling EFI services.
Mar 7 00:53:58.238885 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:53:58.238903 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:53:58.238922 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 7 00:53:58.238941 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 7 00:53:58.238959 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 7 00:53:58.238978 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:53:58.238996 kernel: SMP: Total of 2 processors activated.
Mar 7 00:53:58.239015 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:53:58.239037 kernel: CPU features: detected: 32-bit EL1 Support
Mar 7 00:53:58.239056 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:53:58.239075 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:53:58.239127 kernel: alternatives: applying system-wide alternatives
Mar 7 00:53:58.239153 kernel: devtmpfs: initialized
Mar 7 00:53:58.239173 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:53:58.239193 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:53:58.239212 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:53:58.239232 kernel: SMBIOS 3.0.0 present.
Mar 7 00:53:58.239256 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 7 00:53:58.239275 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:53:58.239295 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:53:58.239314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:53:58.239334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:53:58.239354 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:53:58.239373 kernel: audit: type=2000 audit(0.285:1): state=initialized audit_enabled=0 res=1
Mar 7 00:53:58.239392 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:53:58.239416 kernel: cpuidle: using governor menu
Mar 7 00:53:58.239436 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:53:58.239455 kernel: ASID allocator initialised with 65536 entries
Mar 7 00:53:58.239474 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:53:58.239494 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:53:58.239513 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 7 00:53:58.239532 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 00:53:58.239552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:53:58.239572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:53:58.239595 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:53:58.239615 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:53:58.239635 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:53:58.239655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:53:58.239674 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:53:58.239694 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:53:58.239713 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:53:58.239733 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:53:58.239752 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:53:58.239776 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:53:58.239795 kernel: ACPI: Interpreter enabled
Mar 7 00:53:58.239815 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:53:58.239834 kernel: ACPI: MCFG table detected, 1 entries
Mar 7 00:53:58.239853 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 7 00:53:58.242243 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 00:53:58.242504 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 00:53:58.242714 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 00:53:58.242937 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 7 00:53:58.243200 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 7 00:53:58.243230 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 7 00:53:58.243250 kernel: acpiphp: Slot [1] registered
Mar 7 00:53:58.243270 kernel: acpiphp: Slot [2] registered
Mar 7 00:53:58.243290 kernel: acpiphp: Slot [3] registered
Mar 7 00:53:58.243309 kernel: acpiphp: Slot [4] registered
Mar 7 00:53:58.243328 kernel: acpiphp: Slot [5] registered
Mar 7 00:53:58.243355 kernel: acpiphp: Slot [6] registered
Mar 7 00:53:58.243375 kernel: acpiphp: Slot [7] registered
Mar 7 00:53:58.243395 kernel: acpiphp: Slot [8] registered
Mar 7 00:53:58.243414 kernel: acpiphp: Slot [9] registered
Mar 7 00:53:58.243433 kernel: acpiphp: Slot [10] registered
Mar 7 00:53:58.243452 kernel: acpiphp: Slot [11] registered
Mar 7 00:53:58.243471 kernel: acpiphp: Slot [12] registered
Mar 7 00:53:58.243491 kernel: acpiphp: Slot [13] registered
Mar 7 00:53:58.243510 kernel: acpiphp: Slot [14] registered
Mar 7 00:53:58.243529 kernel: acpiphp: Slot [15] registered
Mar 7 00:53:58.243554 kernel: acpiphp: Slot [16] registered
Mar 7 00:53:58.243573 kernel: acpiphp: Slot [17] registered
Mar 7 00:53:58.243592 kernel: acpiphp: Slot [18] registered
Mar 7 00:53:58.243611 kernel: acpiphp: Slot [19] registered
Mar 7 00:53:58.243630 kernel: acpiphp: Slot [20] registered
Mar 7 00:53:58.243650 kernel: acpiphp: Slot [21] registered
Mar 7 00:53:58.243669 kernel: acpiphp: Slot [22] registered
Mar 7 00:53:58.243688 kernel: acpiphp: Slot [23] registered
Mar 7 00:53:58.243707 kernel: acpiphp: Slot [24] registered
Mar 7 00:53:58.243731 kernel: acpiphp: Slot [25] registered
Mar 7 00:53:58.243751 kernel: acpiphp: Slot [26] registered
Mar 7 00:53:58.243770 kernel: acpiphp: Slot [27] registered
Mar 7 00:53:58.243789 kernel: acpiphp: Slot [28] registered
Mar 7 00:53:58.243808 kernel: acpiphp: Slot [29] registered
Mar 7 00:53:58.243827 kernel: acpiphp: Slot [30] registered
Mar 7 00:53:58.243846 kernel: acpiphp: Slot [31] registered
Mar 7 00:53:58.243865 kernel: PCI host bridge to bus 0000:00
Mar 7 00:53:58.246156 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 7 00:53:58.246433 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 7 00:53:58.246622 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:53:58.246808 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 7 00:53:58.247051 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 7 00:53:58.247322 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 7 00:53:58.247544 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 7 00:53:58.247782 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 00:53:58.247997 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 7 00:53:58.250299 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:53:58.250554 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 00:53:58.250775 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 7 00:53:58.250991 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 7 00:53:58.251243 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 7 00:53:58.251479 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:53:58.251680 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 7 00:53:58.251873 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 7 00:53:58.252069 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:53:58.253955 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 7 00:53:58.253987 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 7 00:53:58.254008 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 7 00:53:58.254029 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 7 00:53:58.254060 kernel: iommu: Default domain type: Translated
Mar 7 00:53:58.254081 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:53:58.254136 kernel: efivars: Registered efivars operations
Mar 7 00:53:58.254157 kernel: vgaarb: loaded
Mar 7 00:53:58.254176 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:53:58.254196 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:53:58.254215 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:53:58.254235 kernel: pnp: PnP ACPI init
Mar 7 00:53:58.254503 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 7 00:53:58.254539 kernel: pnp: PnP ACPI: found 1 devices
Mar 7 00:53:58.254559 kernel: NET: Registered PF_INET protocol family
Mar 7 00:53:58.254579 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:53:58.254598 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:53:58.254618 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:53:58.254638 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:53:58.254657 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:53:58.254677 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:53:58.254701 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:53:58.254721 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:53:58.254740 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:53:58.254759 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:53:58.254778 kernel: kvm [1]: HYP mode not available
Mar 7 00:53:58.254798 kernel: Initialise system trusted keyrings
Mar 7 00:53:58.254817 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:53:58.254836 kernel: Key type asymmetric registered
Mar 7 00:53:58.254855 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:53:58.254879 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 00:53:58.254899 kernel: io scheduler mq-deadline registered
Mar 7 00:53:58.254918 kernel: io scheduler kyber registered
Mar 7 00:53:58.254937 kernel: io scheduler bfq registered
Mar 7 00:53:58.255182 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 7 00:53:58.255212 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 7 00:53:58.255232 kernel: ACPI: button: Power Button [PWRB]
Mar 7 00:53:58.255252 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 7 00:53:58.255272 kernel: ACPI: button: Sleep Button [SLPB]
Mar 7 00:53:58.255298 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 00:53:58.255319 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 7 00:53:58.255539 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 7 00:53:58.255566 kernel: printk: console [ttyS0] disabled
Mar 7 00:53:58.255586 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 7 00:53:58.255606 kernel: printk: console [ttyS0] enabled
Mar 7 00:53:58.255626 kernel: printk: bootconsole [uart0] disabled
Mar 7 00:53:58.255645 kernel: thunder_xcv, ver 1.0
Mar 7 00:53:58.255665 kernel: thunder_bgx, ver 1.0
Mar 7 00:53:58.255690 kernel: nicpf, ver 1.0
Mar 7 00:53:58.255709 kernel: nicvf, ver 1.0
Mar 7 00:53:58.255936 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 00:53:58.258237 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:53:57 UTC (1772844837)
Mar 7 00:53:58.258284 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 00:53:58.258306 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 7 00:53:58.258327 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 7 00:53:58.258348 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 00:53:58.258378 kernel: NET: Registered PF_INET6 protocol family
Mar 7 00:53:58.258399 kernel: Segment Routing with IPv6
Mar 7 00:53:58.258418 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 00:53:58.258438 kernel: NET: Registered PF_PACKET protocol family
Mar 7 00:53:58.258458 kernel: Key type dns_resolver registered
Mar 7 00:53:58.258477 kernel: registered taskstats version 1
Mar 7 00:53:58.258497 kernel: Loading compiled-in X.509 certificates
Mar 7 00:53:58.258517 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9'
Mar 7 00:53:58.258536 kernel: Key type .fscrypt registered
Mar 7 00:53:58.258561 kernel: Key type fscrypt-provisioning registered
Mar 7 00:53:58.258580 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 00:53:58.258600 kernel: ima: Allocated hash algorithm: sha1
Mar 7 00:53:58.258619 kernel: ima: No architecture policies found
Mar 7 00:53:58.258639 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 00:53:58.258658 kernel: clk: Disabling unused clocks
Mar 7 00:53:58.258678 kernel: Freeing unused kernel memory: 39424K
Mar 7 00:53:58.258698 kernel: Run /init as init process
Mar 7 00:53:58.258717 kernel: with arguments:
Mar 7 00:53:58.258741 kernel: /init
Mar 7 00:53:58.258761 kernel: with environment:
Mar 7 00:53:58.258780 kernel: HOME=/
Mar 7 00:53:58.258799 kernel: TERM=linux
Mar 7 00:53:58.258824 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:53:58.258849 systemd[1]: Detected virtualization amazon.
Mar 7 00:53:58.258871 systemd[1]: Detected architecture arm64.
Mar 7 00:53:58.258893 systemd[1]: Running in initrd.
Mar 7 00:53:58.258919 systemd[1]: No hostname configured, using default hostname.
Mar 7 00:53:58.258940 systemd[1]: Hostname set to .
Mar 7 00:53:58.258962 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:53:58.258983 systemd[1]: Queued start job for default target initrd.target.
Mar 7 00:53:58.259004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:53:58.259025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:53:58.259047 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:53:58.259069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:53:58.259132 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:53:58.259158 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:53:58.259184 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:53:58.259206 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:53:58.259228 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:53:58.259250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:53:58.259278 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:53:58.259300 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:53:58.259321 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:53:58.259342 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:53:58.259363 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:53:58.259384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:53:58.259405 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:53:58.259426 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:53:58.259448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:53:58.259475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:53:58.259497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:53:58.259518 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:53:58.259562 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:53:58.259585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:53:58.259607 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:53:58.259629 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:53:58.259650 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:53:58.259671 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:53:58.259742 systemd-journald[251]: Collecting audit messages is disabled.
Mar 7 00:53:58.259788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:53:58.259810 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:53:58.259833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:53:58.259860 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:53:58.259883 systemd-journald[251]: Journal started
Mar 7 00:53:58.259925 systemd-journald[251]: Runtime Journal (/run/log/journal/ec29ce1f93d97336d69875f10f5ddc5c) is 8.0M, max 75.3M, 67.3M free.
Mar 7 00:53:58.256152 systemd-modules-load[252]: Inserted module 'overlay'
Mar 7 00:53:58.276968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:53:58.285895 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:53:58.295117 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:53:58.298877 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:53:58.316080 kernel: Bridge firewalling registered
Mar 7 00:53:58.313484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:53:58.314056 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 7 00:53:58.323285 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:53:58.339612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:53:58.347885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:53:58.356356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:53:58.366402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:53:58.400773 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:53:58.409488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:53:58.430447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:53:58.433984 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:53:58.437536 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:53:58.458493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:53:58.480685 dracut-cmdline[285]: dracut-dracut-053
Mar 7 00:53:58.490198 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:53:58.550027 systemd-resolved[291]: Positive Trust Anchors:
Mar 7 00:53:58.550063 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:53:58.550148 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:53:58.667153 kernel: SCSI subsystem initialized
Mar 7 00:53:58.676115 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:53:58.688126 kernel: iscsi: registered transport (tcp)
Mar 7 00:53:58.710370 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:53:58.710444 kernel: QLogic iSCSI HBA Driver
Mar 7 00:53:58.790126 kernel: random: crng init done
Mar 7 00:53:58.790618 systemd-resolved[291]: Defaulting to hostname 'linux'.
Mar 7 00:53:58.794638 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:53:58.797138 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:53:58.827175 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:53:58.843179 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:53:58.875957 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 00:53:58.876032 kernel: device-mapper: uevent: version 1.0.3
Mar 7 00:53:58.877128 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 00:53:58.944137 kernel: raid6: neonx8 gen() 6750 MB/s
Mar 7 00:53:58.961125 kernel: raid6: neonx4 gen() 6563 MB/s
Mar 7 00:53:58.978124 kernel: raid6: neonx2 gen() 5465 MB/s
Mar 7 00:53:58.995124 kernel: raid6: neonx1 gen() 3979 MB/s
Mar 7 00:53:59.012123 kernel: raid6: int64x8 gen() 3823 MB/s
Mar 7 00:53:59.029124 kernel: raid6: int64x4 gen() 3725 MB/s
Mar 7 00:53:59.046123 kernel: raid6: int64x2 gen() 3616 MB/s
Mar 7 00:53:59.064168 kernel: raid6: int64x1 gen() 2759 MB/s
Mar 7 00:53:59.064215 kernel: raid6: using algorithm neonx8 gen() 6750 MB/s
Mar 7 00:53:59.083135 kernel: raid6: .... xor() 4805 MB/s, rmw enabled
Mar 7 00:53:59.083214 kernel: raid6: using neon recovery algorithm
Mar 7 00:53:59.091127 kernel: xor: measuring software checksum speed
Mar 7 00:53:59.093479 kernel: 8regs : 10249 MB/sec
Mar 7 00:53:59.093512 kernel: 32regs : 11913 MB/sec
Mar 7 00:53:59.094766 kernel: arm64_neon : 9505 MB/sec
Mar 7 00:53:59.094809 kernel: xor: using function: 32regs (11913 MB/sec)
Mar 7 00:53:59.181150 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:53:59.202170 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:53:59.220378 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:53:59.258134 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Mar 7 00:53:59.266815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:53:59.281391 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 00:53:59.322009 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Mar 7 00:53:59.379828 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:53:59.390422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:53:59.517625 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:53:59.530407 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:53:59.566324 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:53:59.571777 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:53:59.572021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:53:59.572713 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:53:59.590102 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:53:59.626840 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:53:59.723008 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:53:59.723079 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 7 00:53:59.723954 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:53:59.724275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:53:59.738971 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 7 00:53:59.739324 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 7 00:53:59.737585 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 7 00:53:59.753162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:53:59.753496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:53:59.768668 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:92:db:fd:5d:99 Mar 7 00:53:59.757389 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:53:59.775889 (udev-worker)[534]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:53:59.788537 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:53:59.788579 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 7 00:53:59.782670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:53:59.800530 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 7 00:53:59.814215 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 00:53:59.814324 kernel: GPT:9289727 != 33554431 Mar 7 00:53:59.814392 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 00:53:59.814459 kernel: GPT:9289727 != 33554431 Mar 7 00:53:59.814526 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 00:53:59.814555 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 7 00:53:59.827789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:53:59.837475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:53:59.888223 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:53:59.923234 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (527) Mar 7 00:53:59.943165 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (531) Mar 7 00:53:59.976521 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
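The GPT warnings above are the usual sign of a cloud image written to a larger volume: the backup (alternate) GPT header still sits at LBA 9289727 from the original image, while the grown EBS disk now ends at LBA 33554431. A hedged sketch of the check behind the "9289727 != 33554431" message (simplified arithmetic, not the kernel's actual code):

```python
def alt_header_misplaced(alt_header_lba: int, total_sectors: int) -> bool:
    """A valid GPT places its backup header at the disk's last LBA."""
    last_lba = total_sectors - 1  # LBAs are zero-based
    return alt_header_lba != last_lba

# Values from the log: backup header at LBA 9289727 on a disk of
# 33554432 sectors (last LBA 33554431), so the check trips.
mismatch = alt_header_misplaced(9289727, 33554432)
```

On a live system, `sgdisk -e /dev/nvme0n1` (or GNU Parted, as the kernel message suggests) relocates the backup GPT structures to the end of the disk; Flatcar's first-boot tooling handles this automatically, which is why the warning is harmless here.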
Mar 7 00:54:00.032952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 00:54:00.063392 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 00:54:00.090830 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 00:54:00.097453 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 00:54:00.111394 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 00:54:00.124760 disk-uuid[662]: Primary Header is updated.
Mar 7 00:54:00.124760 disk-uuid[662]: Secondary Entries is updated.
Mar 7 00:54:00.124760 disk-uuid[662]: Secondary Header is updated.
Mar 7 00:54:00.132141 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:00.160130 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:00.167124 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:01.169165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:01.170813 disk-uuid[663]: The operation has completed successfully.
Mar 7 00:54:01.348777 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 00:54:01.348996 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 00:54:01.407408 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 00:54:01.429784 sh[1006]: Success
Mar 7 00:54:01.455156 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 7 00:54:01.561547 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 00:54:01.569190 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 00:54:01.577237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 00:54:01.619917 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31
Mar 7 00:54:01.619985 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:01.620013 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 00:54:01.621864 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 00:54:01.623268 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 00:54:01.726126 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 00:54:01.749860 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 00:54:01.759749 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 00:54:01.769432 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 00:54:01.779442 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 00:54:01.826753 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:01.826825 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:01.828645 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:01.846949 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:01.862720 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 00:54:01.865633 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:01.874560 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 00:54:01.888454 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 00:54:01.980186 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:54:01.993522 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:54:02.056251 systemd-networkd[1198]: lo: Link UP
Mar 7 00:54:02.056272 systemd-networkd[1198]: lo: Gained carrier
Mar 7 00:54:02.058773 systemd-networkd[1198]: Enumeration completed
Mar 7 00:54:02.058922 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:54:02.059875 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:02.059883 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:54:02.065263 systemd[1]: Reached target network.target - Network.
Mar 7 00:54:02.070912 systemd-networkd[1198]: eth0: Link UP
Mar 7 00:54:02.070920 systemd-networkd[1198]: eth0: Gained carrier
Mar 7 00:54:02.070940 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:02.105193 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.26.221/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 00:54:02.364928 ignition[1127]: Ignition 2.19.0
Mar 7 00:54:02.364956 ignition[1127]: Stage: fetch-offline
Mar 7 00:54:02.371029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:54:02.366600 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:02.366625 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:02.367525 ignition[1127]: Ignition finished successfully
Mar 7 00:54:02.387361 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 00:54:02.422335 ignition[1209]: Ignition 2.19.0
Mar 7 00:54:02.422861 ignition[1209]: Stage: fetch
Mar 7 00:54:02.423537 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:02.423562 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:02.423715 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:02.444768 ignition[1209]: PUT result: OK
Mar 7 00:54:02.448481 ignition[1209]: parsed url from cmdline: ""
Mar 7 00:54:02.448497 ignition[1209]: no config URL provided
Mar 7 00:54:02.448515 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:54:02.448541 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:54:02.448572 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:02.450497 ignition[1209]: PUT result: OK
Mar 7 00:54:02.450574 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 00:54:02.453346 ignition[1209]: GET result: OK
Mar 7 00:54:02.453479 ignition[1209]: parsing config with SHA512: 0b01ff3aad8d6c0f4d84eb87ee3e0e3892340d4fa128313225707097dce5431e3f808a0d8c2c14c6555958c54750d52ee9f9e18f60d636e221de938d68ad74ce
Mar 7 00:54:02.466634 unknown[1209]: fetched base config from "system"
Mar 7 00:54:02.466858 unknown[1209]: fetched base config from "system"
Mar 7 00:54:02.467595 ignition[1209]: fetch: fetch complete
Mar 7 00:54:02.466873 unknown[1209]: fetched user config from "aws"
Mar 7 00:54:02.467631 ignition[1209]: fetch: fetch passed
Mar 7 00:54:02.467722 ignition[1209]: Ignition finished successfully
Mar 7 00:54:02.480590 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 00:54:02.490419 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
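The PUT/GET pair in the fetch stage above is the IMDSv2 session flow: Ignition first PUTs to /latest/api/token to obtain a session token, then GETs the user data with that token in the X-aws-ec2-metadata-token header. A minimal sketch of the flow with the HTTP transport injected, so it can be exercised without a real instance (the `http` callable and the stub below are stand-ins, not Ignition's actual API):

```python
IMDS = "http://169.254.169.254"

def fetch_user_data(http):
    """Two-step IMDSv2 flow: PUT for a session token, then an
    authenticated GET for the instance user data."""
    token = http("PUT", f"{IMDS}/latest/api/token",
                 headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    return http("GET", f"{IMDS}/2019-10-01/user-data",
                headers={"X-aws-ec2-metadata-token": token})

# Offline stand-in for the metadata service, mirroring the log's
# "PUT result: OK" / "GET result: OK" exchange.
def fake_http(method, url, headers=None):
    if method == "PUT" and url.endswith("/api/token"):
        return "test-token"
    if method == "GET" and url.endswith("/user-data"):
        assert headers["X-aws-ec2-metadata-token"] == "test-token"
        return '{"ignition": {"version": "3.0.0"}}'

user_data = fetch_user_data(fake_http)
```

The token-first design is what distinguishes IMDSv2 from the older IMDSv1, where a bare GET would have sufficed.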
Mar 7 00:54:02.522226 ignition[1215]: Ignition 2.19.0
Mar 7 00:54:02.522804 ignition[1215]: Stage: kargs
Mar 7 00:54:02.523602 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:02.523629 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:02.523797 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:02.534174 ignition[1215]: PUT result: OK
Mar 7 00:54:02.541312 ignition[1215]: kargs: kargs passed
Mar 7 00:54:02.541427 ignition[1215]: Ignition finished successfully
Mar 7 00:54:02.548158 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 00:54:02.565515 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 00:54:02.590915 ignition[1221]: Ignition 2.19.0
Mar 7 00:54:02.591695 ignition[1221]: Stage: disks
Mar 7 00:54:02.592558 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:02.592584 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:02.592748 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:02.601923 ignition[1221]: PUT result: OK
Mar 7 00:54:02.606557 ignition[1221]: disks: disks passed
Mar 7 00:54:02.606869 ignition[1221]: Ignition finished successfully
Mar 7 00:54:02.615419 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 00:54:02.621290 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 00:54:02.623960 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:54:02.627018 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:54:02.634578 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:54:02.637006 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:54:02.653382 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 00:54:02.697578 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 00:54:02.703966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 00:54:02.716495 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 00:54:02.818431 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none.
Mar 7 00:54:02.819845 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 00:54:02.824997 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:54:02.846245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:54:02.854368 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 00:54:02.859531 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 00:54:02.859632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 00:54:02.859757 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:54:02.885210 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Mar 7 00:54:02.889290 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:02.889350 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:02.891375 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:02.895337 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 00:54:02.909596 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 00:54:02.925114 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:02.928399 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:54:03.211297 systemd-networkd[1198]: eth0: Gained IPv6LL
Mar 7 00:54:03.248130 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 00:54:03.271481 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Mar 7 00:54:03.282738 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 00:54:03.292255 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 00:54:03.548058 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 00:54:03.561468 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 00:54:03.569412 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 00:54:03.585473 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 00:54:03.590261 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:03.634556 ignition[1362]: INFO : Ignition 2.19.0
Mar 7 00:54:03.634556 ignition[1362]: INFO : Stage: mount
Mar 7 00:54:03.647697 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:03.647697 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:03.647697 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:03.647697 ignition[1362]: INFO : PUT result: OK
Mar 7 00:54:03.635660 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 00:54:03.669243 ignition[1362]: INFO : mount: mount passed
Mar 7 00:54:03.669243 ignition[1362]: INFO : Ignition finished successfully
Mar 7 00:54:03.654001 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 00:54:03.674621 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 00:54:03.829440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
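The `cut: /sysroot/etc/passwd: No such file or directory` lines above are benign on a first boot: initrd-setup-root runs `cut -d:` over account databases that do not exist yet under /sysroot/etc. For reference, extracting a colon-delimited field the way `cut -d: -f N` does can be sketched as follows (the sample entry, including its GECOS text, is illustrative, not taken from this system):

```python
def passwd_field(line: str, field: int) -> str:
    """Return a 1-based colon-delimited field, as `cut -d: -f N` would."""
    return line.split(":")[field - 1]

# Hypothetical passwd(5)-format entry for the "core" user Ignition creates.
entry = "core:x:500:500:CoreOS Admin:/home/core:/bin/bash"
login_shell = passwd_field(entry, 7)  # field 7 of passwd(5) is the shell
```

Unlike `cut`, this raises an IndexError for a missing field rather than printing an empty string, which is an acceptable simplification for a sketch.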
Mar 7 00:54:03.861146 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374)
Mar 7 00:54:03.865918 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:03.865994 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:03.866027 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:03.875153 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:03.877506 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:54:03.916354 ignition[1392]: INFO : Ignition 2.19.0
Mar 7 00:54:03.916354 ignition[1392]: INFO : Stage: files
Mar 7 00:54:03.921074 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:03.921074 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:03.921074 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:03.930498 ignition[1392]: INFO : PUT result: OK
Mar 7 00:54:03.935382 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 00:54:03.939728 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 00:54:03.939728 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 00:54:03.965296 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 00:54:03.969208 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 00:54:03.973343 unknown[1392]: wrote ssh authorized keys file for user: core
Mar 7 00:54:03.975983 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 00:54:03.982074 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:54:03.986683 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 00:54:04.082136 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 00:54:04.260034 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:54:04.260034 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:54:04.260034 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Mar 7 00:54:04.462262 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 00:54:04.574143 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 00:54:04.574143 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 00:54:04.574143 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 00:54:04.574143 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:54:04.574143 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:04.597050 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 00:54:05.026464 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 00:54:05.395814 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:05.395814 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:54:05.403476 ignition[1392]: INFO : files: files passed
Mar 7 00:54:05.403476 ignition[1392]: INFO : Ignition finished successfully
Mar 7 00:54:05.414785 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 00:54:05.439541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 00:54:05.457705 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 00:54:05.469410 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 00:54:05.469683 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 00:54:05.503662 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:05.507636 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:05.511364 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:05.518669 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
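Earlier, in the fetch stage, Ignition logged "parsing config with SHA512: 0b01ff3a…"; that value is simply the SHA-512 digest of the fetched config bytes, logged as a fingerprint before parsing. A minimal sketch of computing such a fingerprint (illustrative, with a made-up config payload, not Ignition's code):

```python
import hashlib

def config_digest(config: bytes) -> str:
    """SHA-512 hex digest of a fetched config, like the one Ignition logs."""
    return hashlib.sha512(config).hexdigest()

# Hypothetical minimal Ignition config, used only to exercise the function.
digest = config_digest(b'{"ignition": {"version": "3.0.0"}}')
```

Logging the digest lets the exact config applied on first boot be matched later against a known-good copy, byte for byte.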
Mar 7 00:54:05.526697 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 00:54:05.540447 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 00:54:05.605051 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 00:54:05.605623 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 00:54:05.617702 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 00:54:05.620754 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 00:54:05.632543 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 00:54:05.645382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 00:54:05.682179 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:54:05.691460 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 00:54:05.716996 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:54:05.722733 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:54:05.726616 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 00:54:05.731413 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 00:54:05.731718 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:54:05.741749 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 00:54:05.745397 systemd[1]: Stopped target basic.target - Basic System. Mar 7 00:54:05.750509 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 00:54:05.757065 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:54:05.762818 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 7 00:54:05.771621 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 00:54:05.778697 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:54:05.782039 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 00:54:05.789715 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 00:54:05.792232 systemd[1]: Stopped target swap.target - Swaps. Mar 7 00:54:05.794658 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 00:54:05.794907 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:54:05.805469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:54:05.805804 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:54:05.813015 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 00:54:05.815426 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:54:05.815671 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 00:54:05.815925 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 00:54:05.826714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 00:54:05.826978 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:54:05.832256 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 00:54:05.832460 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 00:54:05.847815 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 00:54:05.853651 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 00:54:05.853940 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:54:05.870695 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Mar 7 00:54:05.873323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 00:54:05.873731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:54:05.883023 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 00:54:05.883287 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:54:05.907207 ignition[1443]: INFO : Ignition 2.19.0 Mar 7 00:54:05.907207 ignition[1443]: INFO : Stage: umount Mar 7 00:54:05.915313 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:54:05.915313 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 7 00:54:05.915313 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 7 00:54:05.915313 ignition[1443]: INFO : PUT result: OK Mar 7 00:54:05.909762 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 00:54:05.933246 ignition[1443]: INFO : umount: umount passed Mar 7 00:54:05.933246 ignition[1443]: INFO : Ignition finished successfully Mar 7 00:54:05.911842 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 00:54:05.931895 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 00:54:05.932199 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 00:54:05.947073 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 00:54:05.947258 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 00:54:05.958312 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 00:54:05.958438 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 00:54:05.967237 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 00:54:05.968857 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 00:54:05.976229 systemd[1]: Stopped target network.target - Network. 
Mar 7 00:54:05.978366 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 00:54:05.978496 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:54:05.981332 systemd[1]: Stopped target paths.target - Path Units. Mar 7 00:54:05.983412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 00:54:05.987213 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:54:05.990646 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 00:54:05.992993 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 00:54:06.012528 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 00:54:06.012620 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:54:06.015482 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 00:54:06.015557 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:54:06.017834 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 00:54:06.017925 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 00:54:06.020175 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 00:54:06.020261 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 00:54:06.023516 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 00:54:06.026750 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 00:54:06.034940 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 00:54:06.036195 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 00:54:06.038949 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 00:54:06.041165 systemd-networkd[1198]: eth0: DHCPv6 lease lost Mar 7 00:54:06.043990 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Mar 7 00:54:06.044177 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 00:54:06.055613 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 00:54:06.055854 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 00:54:06.063785 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 00:54:06.064157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:06.082493 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 00:54:06.082639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:06.105284 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 00:54:06.110626 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 00:54:06.110925 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:54:06.119538 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:54:06.119660 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:06.122296 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 00:54:06.122383 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:06.125165 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 00:54:06.125265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:06.128551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:06.170681 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 00:54:06.170998 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:54:06.177496 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 00:54:06.177639 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:06.188741 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 00:54:06.188856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:06.192900 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 00:54:06.193005 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:54:06.201771 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 00:54:06.201882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:54:06.213524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:54:06.213637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:06.228417 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 00:54:06.230964 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 00:54:06.231106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:06.234326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:54:06.234418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:06.238158 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 00:54:06.238344 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 00:54:06.275985 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 00:54:06.276252 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 00:54:06.282557 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 00:54:06.297375 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 00:54:06.326688 systemd[1]: Switching root.
Mar 7 00:54:06.374847 systemd-journald[251]: Journal stopped
Mar 7 00:54:08.593383 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Mar 7 00:54:08.593535 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 00:54:08.593580 kernel: SELinux: policy capability open_perms=1
Mar 7 00:54:08.593618 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 00:54:08.593649 kernel: SELinux: policy capability always_check_network=0
Mar 7 00:54:08.593680 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 00:54:08.593710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 00:54:08.593742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 00:54:08.593774 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 00:54:08.593806 kernel: audit: type=1403 audit(1772844846.775:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 00:54:08.593842 systemd[1]: Successfully loaded SELinux policy in 58.707ms.
Mar 7 00:54:08.593891 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.633ms.
Mar 7 00:54:08.593930 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:54:08.593963 systemd[1]: Detected virtualization amazon.
Mar 7 00:54:08.593997 systemd[1]: Detected architecture arm64.
Mar 7 00:54:08.594033 systemd[1]: Detected first boot.
Mar 7 00:54:08.594068 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:54:08.596307 zram_generator::config[1485]: No configuration found.
Mar 7 00:54:08.596367 systemd[1]: Populated /etc with preset unit settings.
Mar 7 00:54:08.599309 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 00:54:08.599364 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 00:54:08.599403 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 00:54:08.599440 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 00:54:08.599476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 00:54:08.599510 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 00:54:08.599542 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 00:54:08.599573 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 00:54:08.599608 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 00:54:08.599644 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 00:54:08.599683 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 00:54:08.599716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:54:08.599747 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:54:08.599778 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 00:54:08.599811 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 00:54:08.599857 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:54:08.599893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:54:08.599925 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 00:54:08.599960 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:54:08.599993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 00:54:08.600023 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 00:54:08.600066 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:54:08.607486 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 00:54:08.607559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:54:08.607596 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:54:08.607627 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:54:08.607668 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:54:08.607699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 00:54:08.607731 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 00:54:08.607762 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:08.607794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:08.607828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:08.607859 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 00:54:08.607893 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 00:54:08.607936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 00:54:08.607972 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 00:54:08.608008 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 00:54:08.608045 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 00:54:08.608079 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 00:54:08.608140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 00:54:08.608175 systemd[1]: Reached target machines.target - Containers.
Mar 7 00:54:08.608206 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 00:54:08.608238 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:08.608274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:54:08.608312 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 00:54:08.608345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:54:08.608377 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:54:08.608412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:54:08.608445 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 00:54:08.608479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:54:08.608514 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:54:08.608548 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 00:54:08.608587 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 00:54:08.608621 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 00:54:08.608656 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 00:54:08.608688 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:54:08.608720 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:54:08.608752 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:54:08.608801 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 00:54:08.608840 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:54:08.608877 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 00:54:08.608916 systemd[1]: Stopped verity-setup.service.
Mar 7 00:54:08.608949 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 00:54:08.608981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 00:54:08.609013 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 00:54:08.609045 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 00:54:08.609075 kernel: loop: module loaded
Mar 7 00:54:08.618922 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 00:54:08.624596 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 00:54:08.624641 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:54:08.624683 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 00:54:08.624717 kernel: fuse: init (API version 7.39)
Mar 7 00:54:08.624752 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 00:54:08.624801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:54:08.624842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:54:08.624879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:54:08.624912 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:54:08.624942 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 00:54:08.624974 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 00:54:08.625005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:54:08.625035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:54:08.625065 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:08.625125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:54:08.625161 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:54:08.625196 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 00:54:08.625277 systemd-journald[1567]: Collecting audit messages is disabled.
Mar 7 00:54:08.625339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 00:54:08.625378 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:54:08.625410 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:08.625444 kernel: ACPI: bus type drm_connector registered
Mar 7 00:54:08.625480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 00:54:08.625512 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 00:54:08.625546 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 00:54:08.625581 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:54:08.625613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:54:08.625646 systemd-journald[1567]: Journal started
Mar 7 00:54:08.625694 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec29ce1f93d97336d69875f10f5ddc5c) is 8.0M, max 75.3M, 67.3M free.
Mar 7 00:54:07.923528 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 00:54:08.637340 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:54:07.951339 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 00:54:07.952202 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 00:54:08.633645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:54:08.633689 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:54:08.638057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 00:54:08.647548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 00:54:08.662539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 00:54:08.665476 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:08.680530 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 00:54:08.687475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 00:54:08.690900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:54:08.694135 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 00:54:08.714615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 00:54:08.721870 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 00:54:08.729698 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 00:54:08.801624 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec29ce1f93d97336d69875f10f5ddc5c is 112.617ms for 902 entries.
Mar 7 00:54:08.801624 systemd-journald[1567]: System Journal (/var/log/journal/ec29ce1f93d97336d69875f10f5ddc5c) is 8.0M, max 195.6M, 187.6M free.
Mar 7 00:54:08.968607 systemd-journald[1567]: Received client request to flush runtime journal.
Mar 7 00:54:08.968700 kernel: loop0: detected capacity change from 0 to 114432
Mar 7 00:54:08.968761 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 00:54:08.970419 kernel: loop1: detected capacity change from 0 to 209336
Mar 7 00:54:08.816672 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 00:54:08.833175 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 00:54:08.838545 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 00:54:08.851357 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 00:54:08.862641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:08.935431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:54:08.956690 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 00:54:08.982835 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 00:54:09.003763 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 00:54:09.009498 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 00:54:09.022829 udevadm[1629]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 00:54:09.056240 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 00:54:09.068522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:54:09.079326 kernel: loop2: detected capacity change from 0 to 52536
Mar 7 00:54:09.136975 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Mar 7 00:54:09.137017 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Mar 7 00:54:09.160346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:09.202138 kernel: loop3: detected capacity change from 0 to 114328
Mar 7 00:54:09.266672 kernel: loop4: detected capacity change from 0 to 114432
Mar 7 00:54:09.292149 kernel: loop5: detected capacity change from 0 to 209336
Mar 7 00:54:09.325140 kernel: loop6: detected capacity change from 0 to 52536
Mar 7 00:54:09.353145 kernel: loop7: detected capacity change from 0 to 114328
Mar 7 00:54:09.368610 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 00:54:09.369761 (sd-merge)[1642]: Merged extensions into '/usr'.
Mar 7 00:54:09.379750 systemd[1]: Reloading requested from client PID 1598 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 00:54:09.379796 systemd[1]: Reloading...
Mar 7 00:54:09.573165 zram_generator::config[1669]: No configuration found.
Mar 7 00:54:09.634003 ldconfig[1592]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 00:54:09.847476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:54:09.966235 systemd[1]: Reloading finished in 585 ms.
Mar 7 00:54:10.005419 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 00:54:10.011002 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 00:54:10.028445 systemd[1]: Starting ensure-sysext.service...
Mar 7 00:54:10.043467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:54:10.074622 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Mar 7 00:54:10.074663 systemd[1]: Reloading...
Mar 7 00:54:10.117825 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 00:54:10.121002 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 00:54:10.123835 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 00:54:10.124754 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Mar 7 00:54:10.125164 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Mar 7 00:54:10.134545 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:54:10.134576 systemd-tmpfiles[1722]: Skipping /boot
Mar 7 00:54:10.163886 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:54:10.163929 systemd-tmpfiles[1722]: Skipping /boot
Mar 7 00:54:10.259139 zram_generator::config[1752]: No configuration found.
Mar 7 00:54:10.494622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:54:10.614574 systemd[1]: Reloading finished in 539 ms.
Mar 7 00:54:10.648848 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 00:54:10.663330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:10.686561 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:54:10.694512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 00:54:10.699892 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 00:54:10.718614 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:54:10.724279 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:10.734444 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 00:54:10.747015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:10.770857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:54:10.779697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:54:10.784945 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:54:10.794500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:10.803747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 00:54:10.812415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:10.812893 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:10.822020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:10.829766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:54:10.832509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:10.833021 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 00:54:10.840133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:54:10.840563 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:54:10.853523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 00:54:10.873297 systemd[1]: Finished ensure-sysext.service.
Mar 7 00:54:10.876386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:54:10.876751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:54:10.903323 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 00:54:10.930049 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 00:54:10.937963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:54:10.938389 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:54:10.941917 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:54:10.962006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:54:10.963633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:54:10.966952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:54:10.988980 augenrules[1838]: No rules
Mar 7 00:54:10.997017 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:54:11.015251 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 00:54:11.045501 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 00:54:11.053986 systemd-udevd[1809]: Using default interface naming scheme 'v255'.
Mar 7 00:54:11.114463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 00:54:11.120601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:54:11.141383 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:54:11.143880 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 00:54:11.293457 systemd-resolved[1807]: Positive Trust Anchors:
Mar 7 00:54:11.293506 systemd-resolved[1807]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:54:11.293572 systemd-resolved[1807]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:54:11.322965 systemd-resolved[1807]: Defaulting to hostname 'linux'.
Mar 7 00:54:11.329632 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:11.334471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:54:11.347023 systemd-networkd[1855]: lo: Link UP
Mar 7 00:54:11.347605 systemd-networkd[1855]: lo: Gained carrier
Mar 7 00:54:11.349437 systemd-networkd[1855]: Enumeration completed
Mar 7 00:54:11.349655 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:54:11.353123 systemd[1]: Reached target network.target - Network.
Mar 7 00:54:11.372602 (udev-worker)[1857]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:54:11.387597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 00:54:11.393515 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 00:54:11.476907 systemd-networkd[1855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:11.477167 systemd-networkd[1855]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:54:11.482933 systemd-networkd[1855]: eth0: Link UP
Mar 7 00:54:11.483597 systemd-networkd[1855]: eth0: Gained carrier
Mar 7 00:54:11.485239 systemd-networkd[1855]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:11.505314 systemd-networkd[1855]: eth0: DHCPv4 address 172.31.26.221/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 00:54:11.578206 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1868)
Mar 7 00:54:11.783505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:11.865463 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 00:54:11.870222 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 00:54:11.885529 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 00:54:11.891662 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 00:54:11.920132 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:54:11.941756 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 00:54:11.961392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:11.970002 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 00:54:11.973558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:54:11.976272 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:54:11.978996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 00:54:11.982311 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 00:54:11.985629 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 00:54:11.988357 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 00:54:11.991330 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 00:54:11.994259 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 00:54:11.994318 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:54:11.996502 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:54:12.000235 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 00:54:12.005929 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 00:54:12.015290 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 00:54:12.025456 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 00:54:12.029358 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 00:54:12.032214 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:54:12.034623 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:54:12.036968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:54:12.037022 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:54:12.044440 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 00:54:12.053471 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:54:12.058318 lvm[1981]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:54:12.060449 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:54:12.068489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:54:12.076437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:54:12.078887 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:54:12.084336 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:54:12.091526 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 00:54:12.100384 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 00:54:12.108368 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 00:54:12.118431 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:54:12.126557 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:54:12.138753 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:54:12.144876 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:54:12.146877 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:54:12.156586 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:54:12.168225 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 7 00:54:12.179750 dbus-daemon[1984]: [system] SELinux support is enabled Mar 7 00:54:12.180683 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 00:54:12.191078 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:54:12.192059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:54:12.195218 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:54:12.195281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:54:12.208438 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1855 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:12.221513 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 7 00:54:12.241140 jq[1985]: false Mar 7 00:54:12.253540 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:54:12.256734 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Mar 7 00:54:12.278151 extend-filesystems[1986]: Found loop4 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found loop5 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found loop6 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found loop7 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p1 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p2 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p3 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found usr Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p4 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p6 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p7 Mar 7 00:54:12.278151 extend-filesystems[1986]: Found nvme0n1p9 Mar 7 00:54:12.278151 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:14:43 UTC 2026 (1): Starting Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: ---------------------------------------------------- Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: corporation. Support and training for ntp-4 are Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: available at https://www.nwtime.org/support Mar 7 00:54:12.405320 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: ---------------------------------------------------- Mar 7 00:54:12.353111 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 7 00:54:12.413950 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 Mar 7 00:54:12.438322 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 7 00:54:12.400566 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:14:43 UTC 2026 (1): Starting Mar 7 00:54:12.377380 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:54:12.442843 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: proto: precision = 0.096 usec (-23) Mar 7 00:54:12.442905 extend-filesystems[2019]: resize2fs 1.47.1 (20-May-2024) Mar 7 00:54:12.400622 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 00:54:12.380867 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:54:12.464913 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: basedate set to 2026-02-22 Mar 7 00:54:12.464913 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: gps base set to 2026-02-22 (week 2407) Mar 7 00:54:12.400644 ntpd[1988]: ---------------------------------------------------- Mar 7 00:54:12.400663 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Mar 7 00:54:12.400683 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 00:54:12.465799 jq[1995]: true Mar 7 00:54:12.400702 ntpd[1988]: corporation. 
Support and training for ntp-4 are Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listen normally on 3 eth0 172.31.26.221:123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listen normally on 4 lo [::1]:123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: bind(21) AF_INET6 fe80::492:dbff:fefd:5d99%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: unable to create socket on eth0 (5) for fe80::492:dbff:fefd:5d99%2#123 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: failed to init interface for address fe80::492:dbff:fefd:5d99%2 Mar 7 00:54:12.480933 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Mar 7 00:54:12.400720 ntpd[1988]: available at https://www.nwtime.org/support Mar 7 00:54:12.481560 tar[2006]: linux-arm64/LICENSE Mar 7 00:54:12.481560 tar[2006]: linux-arm64/helm Mar 7 00:54:12.400740 ntpd[1988]: ---------------------------------------------------- Mar 7 00:54:12.484853 (ntainerd)[2021]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:54:12.434366 ntpd[1988]: proto: precision = 0.096 usec (-23) Mar 7 00:54:12.443235 ntpd[1988]: basedate set to 2026-02-22 Mar 7 00:54:12.443273 ntpd[1988]: gps base set to 2026-02-22 (week 2407) Mar 7 00:54:12.470121 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 00:54:12.470210 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 00:54:12.470481 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 00:54:12.470550 ntpd[1988]: Listen normally on 3 
eth0 172.31.26.221:123 Mar 7 00:54:12.470624 ntpd[1988]: Listen normally on 4 lo [::1]:123 Mar 7 00:54:12.470701 ntpd[1988]: bind(21) AF_INET6 fe80::492:dbff:fefd:5d99%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 00:54:12.470744 ntpd[1988]: unable to create socket on eth0 (5) for fe80::492:dbff:fefd:5d99%2#123 Mar 7 00:54:12.470773 ntpd[1988]: failed to init interface for address fe80::492:dbff:fefd:5d99%2 Mar 7 00:54:12.470826 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Mar 7 00:54:12.555207 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:12.566209 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:12.566209 ntpd[1988]: 7 Mar 00:54:12 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:12.555263 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:12.584801 update_engine[1994]: I20260307 00:54:12.576745 1994 main.cc:92] Flatcar Update Engine starting Mar 7 00:54:12.590372 jq[2023]: true Mar 7 00:54:12.592663 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:54:12.593065 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:54:12.603291 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 7 00:54:12.610944 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:54:12.618662 update_engine[1994]: I20260307 00:54:12.616420 1994 update_check_scheduler.cc:74] Next update check in 7m31s Mar 7 00:54:12.622436 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 7 00:54:12.656116 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 7 00:54:12.683373 systemd-networkd[1855]: eth0: Gained IPv6LL Mar 7 00:54:12.690783 extend-filesystems[2019]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 7 00:54:12.690783 extend-filesystems[2019]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 7 00:54:12.690783 extend-filesystems[2019]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 7 00:54:12.707997 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1868) Mar 7 00:54:12.708152 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 Mar 7 00:54:12.695337 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.694 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.705 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.712 INFO Fetch successful Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.712 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.720 INFO Fetch successful Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.720 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.729 INFO Fetch successful Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.731 INFO Fetch successful Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 7 
00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.743 INFO Fetch failed with 404: resource not found Mar 7 00:54:12.752383 coreos-metadata[1983]: Mar 07 00:54:12.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 7 00:54:12.696061 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:54:12.716664 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 00:54:12.726021 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 00:54:12.731578 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 7 00:54:12.737547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:12.743430 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:54:12.771121 coreos-metadata[1983]: Mar 07 00:54:12.763 INFO Fetch successful Mar 7 00:54:12.771121 coreos-metadata[1983]: Mar 07 00:54:12.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 7 00:54:12.776119 coreos-metadata[1983]: Mar 07 00:54:12.771 INFO Fetch successful Mar 7 00:54:12.776119 coreos-metadata[1983]: Mar 07 00:54:12.771 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 7 00:54:12.776528 coreos-metadata[1983]: Mar 07 00:54:12.776 INFO Fetch successful Mar 7 00:54:12.776528 coreos-metadata[1983]: Mar 07 00:54:12.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 7 00:54:12.783783 coreos-metadata[1983]: Mar 07 00:54:12.783 INFO Fetch successful Mar 7 00:54:12.783783 coreos-metadata[1983]: Mar 07 00:54:12.783 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 7 00:54:12.788130 coreos-metadata[1983]: Mar 07 00:54:12.786 INFO Fetch successful Mar 7 00:54:12.914708 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Mar 7 00:54:12.917728 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:54:12.969705 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:54:12.990401 bash[2107]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:13.004216 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:54:13.012504 systemd[1]: Starting sshkeys.service... Mar 7 00:54:13.062678 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:54:13.086776 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:54:13.086847 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 7 00:54:13.087411 systemd-logind[1993]: New seat seat0. Mar 7 00:54:13.092452 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:54:13.139837 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:54:13.224414 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 7 00:54:13.305928 amazon-ssm-agent[2057]: Initializing new seelog logger Mar 7 00:54:13.305928 amazon-ssm-agent[2057]: New Seelog Logger Creation Complete Mar 7 00:54:13.305928 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.305928 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.305928 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 processing appconfig overrides Mar 7 00:54:13.306647 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.306647 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 7 00:54:13.306647 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 processing appconfig overrides Mar 7 00:54:13.306789 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.306789 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.306871 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 processing appconfig overrides Mar 7 00:54:13.320503 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO Proxy environment variables: Mar 7 00:54:13.330114 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.330114 amazon-ssm-agent[2057]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:13.330114 amazon-ssm-agent[2057]: 2026/03/07 00:54:13 processing appconfig overrides Mar 7 00:54:13.379177 containerd[2021]: time="2026-03-07T00:54:13.376596838Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:54:13.415617 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO no_proxy: Mar 7 00:54:13.511687 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 00:54:13.512678 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 00:54:13.515892 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO https_proxy: Mar 7 00:54:13.519767 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2000 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:13.573373 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 7 00:54:13.618150 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO http_proxy: Mar 7 00:54:13.690011 polkitd[2173]: Started polkitd version 121 Mar 7 00:54:13.729540 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO Checking if agent identity type OnPrem can be assumed Mar 7 00:54:13.775853 coreos-metadata[2135]: Mar 07 00:54:13.773 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:13.780353 coreos-metadata[2135]: Mar 07 00:54:13.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 7 00:54:13.784709 coreos-metadata[2135]: Mar 07 00:54:13.781 INFO Fetch successful Mar 7 00:54:13.784709 coreos-metadata[2135]: Mar 07 00:54:13.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 7 00:54:13.783037 polkitd[2173]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 00:54:13.782358 locksmithd[2046]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:54:13.800141 coreos-metadata[2135]: Mar 07 00:54:13.795 INFO Fetch successful Mar 7 00:54:13.799454 polkitd[2173]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 00:54:13.806307 containerd[2021]: time="2026-03-07T00:54:13.803194224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.806255 unknown[2135]: wrote ssh authorized keys file for user: core Mar 7 00:54:13.806975 polkitd[2173]: Finished loading, compiling and executing 2 rules Mar 7 00:54:13.818792 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 00:54:13.819133 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 7 00:54:13.833224 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO Checking if agent identity type EC2 can be assumed Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.831665245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.831742909Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.831781225Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832208557Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832276561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832439461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832475173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832901497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832956181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.832991785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:13.833355 containerd[2021]: time="2026-03-07T00:54:13.833019049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.826435 polkitd[2173]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 00:54:13.841435 containerd[2021]: time="2026-03-07T00:54:13.839778205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.841435 containerd[2021]: time="2026-03-07T00:54:13.840497017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:13.848131 containerd[2021]: time="2026-03-07T00:54:13.845306317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:13.852509 containerd[2021]: time="2026-03-07T00:54:13.851806321Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:54:13.852509 containerd[2021]: time="2026-03-07T00:54:13.852255469Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 7 00:54:13.852509 containerd[2021]: time="2026-03-07T00:54:13.852415189Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:54:13.876262 containerd[2021]: time="2026-03-07T00:54:13.875909989Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:54:13.876262 containerd[2021]: time="2026-03-07T00:54:13.876033385Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 00:54:13.877483 containerd[2021]: time="2026-03-07T00:54:13.876074857Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:54:13.877483 containerd[2021]: time="2026-03-07T00:54:13.876661933Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 00:54:13.877483 containerd[2021]: time="2026-03-07T00:54:13.876718921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:54:13.877483 containerd[2021]: time="2026-03-07T00:54:13.877062769Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:54:13.881492 containerd[2021]: time="2026-03-07T00:54:13.880571485Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887477173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887550097Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887584957Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887663413Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887703325Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887741449Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887778673Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887818417Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887856085Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887888809Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887918497Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.887964253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.888000577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:13.889609 containerd[2021]: time="2026-03-07T00:54:13.888030781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888078841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888155785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888194317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888227257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888262777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888307777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888348289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888380869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888412117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888443113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888496165Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888547513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888592789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.890411 containerd[2021]: time="2026-03-07T00:54:13.888622513Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902735161Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902852797Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902887789Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902919961Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902946073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.902981197Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.903008233Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:54:13.909134 containerd[2021]: time="2026-03-07T00:54:13.903042061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 00:54:13.909657 containerd[2021]: time="2026-03-07T00:54:13.903636997Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 00:54:13.909657 containerd[2021]: time="2026-03-07T00:54:13.903781249Z" level=info msg="Connect containerd service" Mar 7 00:54:13.909657 containerd[2021]: time="2026-03-07T00:54:13.903856693Z" level=info msg="using legacy CRI server" Mar 7 00:54:13.909657 containerd[2021]: time="2026-03-07T00:54:13.903876157Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:54:13.924139 update-ssh-keys[2197]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:13.918750 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Mar 7 00:54:13.924933 containerd[2021]: time="2026-03-07T00:54:13.904064161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 00:54:13.924933 containerd[2021]: time="2026-03-07T00:54:13.918819541Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.929653897Z" level=info msg="Start subscribing containerd event" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.929758381Z" level=info msg="Start recovering state" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.929928997Z" level=info msg="Start event monitor" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.929958205Z" level=info msg="Start snapshots syncer" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.929990245Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:54:13.932154 containerd[2021]: time="2026-03-07T00:54:13.930029809Z" level=info msg="Start streaming server" Mar 7 00:54:13.940479 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO Agent will take identity from EC2 Mar 7 00:54:13.932728 systemd[1]: Finished sshkeys.service. Mar 7 00:54:13.949418 containerd[2021]: time="2026-03-07T00:54:13.947541601Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 00:54:13.949418 containerd[2021]: time="2026-03-07T00:54:13.947681389Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:54:13.949418 containerd[2021]: time="2026-03-07T00:54:13.948517909Z" level=info msg="containerd successfully booted in 0.587361s" Mar 7 00:54:13.949300 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 7 00:54:13.971452 systemd-hostnamed[2000]: Hostname set to (transient) Mar 7 00:54:13.971673 systemd-resolved[1807]: System hostname changed to 'ip-172-31-26-221'. Mar 7 00:54:14.035127 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:14.134218 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:14.231298 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:14.331041 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 7 00:54:14.376181 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 7 00:54:14.377457 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] Starting Core Agent Mar 7 00:54:14.377910 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 7 00:54:14.378034 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [Registrar] Starting registrar module Mar 7 00:54:14.378911 amazon-ssm-agent[2057]: 2026-03-07 00:54:13 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 7 00:54:14.378911 amazon-ssm-agent[2057]: 2026-03-07 00:54:14 INFO [EC2Identity] EC2 registration was successful. 
Mar 7 00:54:14.378911 amazon-ssm-agent[2057]: 2026-03-07 00:54:14 INFO [CredentialRefresher] credentialRefresher has started Mar 7 00:54:14.378911 amazon-ssm-agent[2057]: 2026-03-07 00:54:14 INFO [CredentialRefresher] Starting credentials refresher loop Mar 7 00:54:14.378911 amazon-ssm-agent[2057]: 2026-03-07 00:54:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 7 00:54:14.431959 amazon-ssm-agent[2057]: 2026-03-07 00:54:14 INFO [CredentialRefresher] Next credential rotation will be in 32.366608493966666 minutes Mar 7 00:54:14.914695 tar[2006]: linux-arm64/README.md Mar 7 00:54:14.939285 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:54:15.231509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:15.248174 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:15.337821 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:54:15.378121 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:54:15.392716 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:54:15.401732 ntpd[1988]: Listen normally on 6 eth0 [fe80::492:dbff:fefd:5d99%2]:123 Mar 7 00:54:15.402345 ntpd[1988]: 7 Mar 00:54:15 ntpd[1988]: Listen normally on 6 eth0 [fe80::492:dbff:fefd:5d99%2]:123 Mar 7 00:54:15.410214 systemd[1]: Started sshd@0-172.31.26.221:22-20.161.92.111:45994.service - OpenSSH per-connection server daemon (20.161.92.111:45994). Mar 7 00:54:15.444262 amazon-ssm-agent[2057]: 2026-03-07 00:54:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 7 00:54:15.445251 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:54:15.447617 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 7 00:54:15.465713 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:54:15.515285 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:54:15.530651 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:54:15.541806 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 00:54:15.544897 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:54:15.547400 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 00:54:15.549903 amazon-ssm-agent[2057]: 2026-03-07 00:54:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2232) started Mar 7 00:54:15.550031 systemd[1]: Startup finished in 1.177s (kernel) + 8.951s (initrd) + 8.832s (userspace) = 18.961s. Mar 7 00:54:15.650240 amazon-ssm-agent[2057]: 2026-03-07 00:54:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 7 00:54:15.978464 sshd[2229]: Accepted publickey for core from 20.161.92.111 port 45994 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:15.981672 sshd[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:16.007222 systemd-logind[1993]: New session 1 of user core. Mar 7 00:54:16.011222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 00:54:16.022865 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 00:54:16.056196 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:54:16.068838 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:54:16.090906 (systemd)[2258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:54:16.349418 systemd[2258]: Queued start job for default target default.target. 
Mar 7 00:54:16.353468 kubelet[2217]: E0307 00:54:16.353367 2217 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:16.359006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:16.361215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:54:16.362192 systemd[1]: kubelet.service: Consumed 1.451s CPU time. Mar 7 00:54:16.362446 systemd[2258]: Created slice app.slice - User Application Slice. Mar 7 00:54:16.362684 systemd[2258]: Reached target paths.target - Paths. Mar 7 00:54:16.362916 systemd[2258]: Reached target timers.target - Timers. Mar 7 00:54:16.366690 systemd[2258]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 00:54:16.402303 systemd[2258]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 00:54:16.402619 systemd[2258]: Reached target sockets.target - Sockets. Mar 7 00:54:16.402658 systemd[2258]: Reached target basic.target - Basic System. Mar 7 00:54:16.402777 systemd[2258]: Reached target default.target - Main User Target. Mar 7 00:54:16.402853 systemd[2258]: Startup finished in 298ms. Mar 7 00:54:16.402908 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:54:16.411457 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:54:16.790790 systemd[1]: Started sshd@1-172.31.26.221:22-20.161.92.111:46002.service - OpenSSH per-connection server daemon (20.161.92.111:46002). 
Mar 7 00:54:17.305434 sshd[2270]: Accepted publickey for core from 20.161.92.111 port 46002 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:17.308428 sshd[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:17.318234 systemd-logind[1993]: New session 2 of user core. Mar 7 00:54:17.329698 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:54:17.670865 sshd[2270]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:17.678412 systemd[1]: sshd@1-172.31.26.221:22-20.161.92.111:46002.service: Deactivated successfully. Mar 7 00:54:17.684184 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 00:54:17.685688 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Mar 7 00:54:17.688953 systemd-logind[1993]: Removed session 2. Mar 7 00:54:17.769653 systemd[1]: Started sshd@2-172.31.26.221:22-20.161.92.111:46004.service - OpenSSH per-connection server daemon (20.161.92.111:46004). Mar 7 00:54:18.282654 sshd[2277]: Accepted publickey for core from 20.161.92.111 port 46004 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:18.285838 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:18.297328 systemd-logind[1993]: New session 3 of user core. Mar 7 00:54:18.304498 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 00:54:18.637475 sshd[2277]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:18.645444 systemd[1]: sshd@2-172.31.26.221:22-20.161.92.111:46004.service: Deactivated successfully. Mar 7 00:54:18.649036 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 00:54:18.650521 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Mar 7 00:54:18.653023 systemd-logind[1993]: Removed session 3. 
Mar 7 00:54:18.731643 systemd[1]: Started sshd@3-172.31.26.221:22-20.161.92.111:46020.service - OpenSSH per-connection server daemon (20.161.92.111:46020). Mar 7 00:54:19.242523 sshd[2285]: Accepted publickey for core from 20.161.92.111 port 46020 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:19.245327 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:19.252547 systemd-logind[1993]: New session 4 of user core. Mar 7 00:54:19.267375 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:54:18.952328 systemd-resolved[1807]: Clock change detected. Flushing caches. Mar 7 00:54:18.964639 systemd-journald[1567]: Time jumped backwards, rotating. Mar 7 00:54:19.154765 sshd[2285]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:19.160791 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Mar 7 00:54:19.161192 systemd[1]: sshd@3-172.31.26.221:22-20.161.92.111:46020.service: Deactivated successfully. Mar 7 00:54:19.164447 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:54:19.169164 systemd-logind[1993]: Removed session 4. Mar 7 00:54:19.254692 systemd[1]: Started sshd@4-172.31.26.221:22-20.161.92.111:46024.service - OpenSSH per-connection server daemon (20.161.92.111:46024). Mar 7 00:54:19.756365 sshd[2293]: Accepted publickey for core from 20.161.92.111 port 46024 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:19.758936 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:19.768540 systemd-logind[1993]: New session 5 of user core. Mar 7 00:54:19.774516 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 00:54:20.056945 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:54:20.057711 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:20.077035 sudo[2296]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:20.156461 sshd[2293]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:20.163394 systemd[1]: sshd@4-172.31.26.221:22-20.161.92.111:46024.service: Deactivated successfully. Mar 7 00:54:20.167194 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:54:20.169091 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:54:20.172431 systemd-logind[1993]: Removed session 5. Mar 7 00:54:20.253754 systemd[1]: Started sshd@5-172.31.26.221:22-20.161.92.111:58686.service - OpenSSH per-connection server daemon (20.161.92.111:58686). Mar 7 00:54:20.770186 sshd[2301]: Accepted publickey for core from 20.161.92.111 port 58686 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:20.773137 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:20.781652 systemd-logind[1993]: New session 6 of user core. Mar 7 00:54:20.793003 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 00:54:21.053530 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 00:54:21.054695 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:21.061503 sudo[2305]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:21.072695 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 00:54:21.073397 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:21.099794 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 00:54:21.105754 auditctl[2308]: No rules Mar 7 00:54:21.106907 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:54:21.107393 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 00:54:21.117024 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:54:21.175171 augenrules[2326]: No rules Mar 7 00:54:21.179371 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:54:21.181829 sudo[2304]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:21.260854 sshd[2301]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:21.267618 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:54:21.268779 systemd[1]: sshd@5-172.31.26.221:22-20.161.92.111:58686.service: Deactivated successfully. Mar 7 00:54:21.272364 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:54:21.274342 systemd-logind[1993]: Removed session 6. Mar 7 00:54:21.354764 systemd[1]: Started sshd@6-172.31.26.221:22-20.161.92.111:58698.service - OpenSSH per-connection server daemon (20.161.92.111:58698). 
Mar 7 00:54:21.866273 sshd[2334]: Accepted publickey for core from 20.161.92.111 port 58698 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:21.868330 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:21.875769 systemd-logind[1993]: New session 7 of user core. Mar 7 00:54:21.886586 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:54:22.148393 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 00:54:22.149635 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:22.657821 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 00:54:22.658016 (dockerd)[2352]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 00:54:23.083341 dockerd[2352]: time="2026-03-07T00:54:23.082865604Z" level=info msg="Starting up" Mar 7 00:54:23.212776 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport627308386-merged.mount: Deactivated successfully. Mar 7 00:54:23.238451 dockerd[2352]: time="2026-03-07T00:54:23.238073161Z" level=info msg="Loading containers: start." Mar 7 00:54:23.437295 kernel: Initializing XFRM netlink socket Mar 7 00:54:23.479949 (udev-worker)[2375]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:54:23.587630 systemd-networkd[1855]: docker0: Link UP Mar 7 00:54:23.619635 dockerd[2352]: time="2026-03-07T00:54:23.618195783Z" level=info msg="Loading containers: done." Mar 7 00:54:23.645898 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3382952151-merged.mount: Deactivated successfully. 
Mar 7 00:54:23.662357 dockerd[2352]: time="2026-03-07T00:54:23.662204175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 00:54:23.663050 dockerd[2352]: time="2026-03-07T00:54:23.662741163Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 00:54:23.663612 dockerd[2352]: time="2026-03-07T00:54:23.663373551Z" level=info msg="Daemon has completed initialization" Mar 7 00:54:23.739200 dockerd[2352]: time="2026-03-07T00:54:23.738749931Z" level=info msg="API listen on /run/docker.sock" Mar 7 00:54:23.739941 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 00:54:24.599516 containerd[2021]: time="2026-03-07T00:54:24.599384188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 00:54:25.257724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1832514278.mount: Deactivated successfully. Mar 7 00:54:26.161459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 00:54:26.173771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:26.618531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:54:26.631436 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:26.724247 kubelet[2558]: E0307 00:54:26.722651 2558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:26.731678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:26.732057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:54:27.130106 containerd[2021]: time="2026-03-07T00:54:27.130016224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:27.135304 containerd[2021]: time="2026-03-07T00:54:27.133301716Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 7 00:54:27.141381 containerd[2021]: time="2026-03-07T00:54:27.141305032Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:27.147971 containerd[2021]: time="2026-03-07T00:54:27.147885424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:27.150536 containerd[2021]: time="2026-03-07T00:54:27.150469228Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.55089376s" Mar 7 00:54:27.150662 containerd[2021]: time="2026-03-07T00:54:27.150535576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 7 00:54:27.151667 containerd[2021]: time="2026-03-07T00:54:27.151484020Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 00:54:28.932279 containerd[2021]: time="2026-03-07T00:54:28.930841089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.933962 containerd[2021]: time="2026-03-07T00:54:28.933895533Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 7 00:54:28.935685 containerd[2021]: time="2026-03-07T00:54:28.935624265Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.943364 containerd[2021]: time="2026-03-07T00:54:28.943279161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.947130 containerd[2021]: time="2026-03-07T00:54:28.947058969Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.795509597s" 
Mar 7 00:54:28.947397 containerd[2021]: time="2026-03-07T00:54:28.947356917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 7 00:54:28.948597 containerd[2021]: time="2026-03-07T00:54:28.948526317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 00:54:30.611532 containerd[2021]: time="2026-03-07T00:54:30.611435469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.613989 containerd[2021]: time="2026-03-07T00:54:30.613911633Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305" Mar 7 00:54:30.616293 containerd[2021]: time="2026-03-07T00:54:30.616153881Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.622452 containerd[2021]: time="2026-03-07T00:54:30.622351413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.628306 containerd[2021]: time="2026-03-07T00:54:30.626449929Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.67779934s" Mar 7 00:54:30.628306 containerd[2021]: time="2026-03-07T00:54:30.626553249Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference 
\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 7 00:54:30.628306 containerd[2021]: time="2026-03-07T00:54:30.627453813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 00:54:31.995393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202204485.mount: Deactivated successfully. Mar 7 00:54:32.585625 containerd[2021]: time="2026-03-07T00:54:32.585557807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.592322 containerd[2021]: time="2026-03-07T00:54:32.592257827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870" Mar 7 00:54:32.595265 containerd[2021]: time="2026-03-07T00:54:32.595096847Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.602265 containerd[2021]: time="2026-03-07T00:54:32.602018039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.607255 containerd[2021]: time="2026-03-07T00:54:32.607171535Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.979630722s" Mar 7 00:54:32.607430 containerd[2021]: time="2026-03-07T00:54:32.607398347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 7 00:54:32.608209 
containerd[2021]: time="2026-03-07T00:54:32.608157755Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 00:54:33.118699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923558667.mount: Deactivated successfully. Mar 7 00:54:34.322524 containerd[2021]: time="2026-03-07T00:54:34.322458096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.325827 containerd[2021]: time="2026-03-07T00:54:34.325747776Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Mar 7 00:54:34.327368 containerd[2021]: time="2026-03-07T00:54:34.327288492Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.332731 containerd[2021]: time="2026-03-07T00:54:34.332643648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.335513 containerd[2021]: time="2026-03-07T00:54:34.335263404Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.727042829s" Mar 7 00:54:34.335513 containerd[2021]: time="2026-03-07T00:54:34.335328972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 7 00:54:34.336312 containerd[2021]: time="2026-03-07T00:54:34.336211836Z" level=info 
msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 00:54:34.838359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966881249.mount: Deactivated successfully. Mar 7 00:54:34.858268 containerd[2021]: time="2026-03-07T00:54:34.857916291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.859605 containerd[2021]: time="2026-03-07T00:54:34.859550895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 7 00:54:34.861662 containerd[2021]: time="2026-03-07T00:54:34.860945967Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.865515 containerd[2021]: time="2026-03-07T00:54:34.865449399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.867447 containerd[2021]: time="2026-03-07T00:54:34.867388155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.921511ms" Mar 7 00:54:34.867532 containerd[2021]: time="2026-03-07T00:54:34.867444123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 7 00:54:34.868742 containerd[2021]: time="2026-03-07T00:54:34.868685379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 00:54:35.411096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110674838.mount: Deactivated 
successfully. Mar 7 00:54:36.982461 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 00:54:36.989627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:38.122267 containerd[2021]: time="2026-03-07T00:54:38.120654555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.145713 containerd[2021]: time="2026-03-07T00:54:38.145640583Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Mar 7 00:54:38.189800 containerd[2021]: time="2026-03-07T00:54:38.189708339Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.211622 containerd[2021]: time="2026-03-07T00:54:38.211529163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.216793 containerd[2021]: time="2026-03-07T00:54:38.216669123Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 3.3479218s" Mar 7 00:54:38.216793 containerd[2021]: time="2026-03-07T00:54:38.216743091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 7 00:54:38.413705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:54:38.428828 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:38.531175 kubelet[2717]: E0307 00:54:38.531080 2717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:38.536183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:38.536579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:54:43.558449 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 00:54:45.949851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:45.960778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:46.020827 systemd[1]: Reloading requested from client PID 2748 ('systemctl') (unit session-7.scope)... Mar 7 00:54:46.021174 systemd[1]: Reloading... Mar 7 00:54:46.273290 zram_generator::config[2794]: No configuration found. Mar 7 00:54:46.516078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:46.692172 systemd[1]: Reloading finished in 670 ms. Mar 7 00:54:46.766889 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 00:54:46.767084 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 00:54:46.769309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:46.785800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 00:54:47.107562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:47.129269 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:54:47.204007 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:47.204007 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:54:47.204007 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:47.204007 kubelet[2849]: I0307 00:54:47.203638 2849 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:54:49.712493 kubelet[2849]: I0307 00:54:49.712421 2849 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:54:49.712493 kubelet[2849]: I0307 00:54:49.712474 2849 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:54:49.713257 kubelet[2849]: I0307 00:54:49.712887 2849 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:54:49.754527 kubelet[2849]: E0307 00:54:49.754459 2849 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.221:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.221:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 00:54:49.754962 kubelet[2849]: I0307 00:54:49.754642 2849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:54:49.775344 kubelet[2849]: E0307 00:54:49.774746 2849 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:54:49.775344 kubelet[2849]: I0307 00:54:49.774837 2849 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:54:49.781733 kubelet[2849]: I0307 00:54:49.781667 2849 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 00:54:49.782279 kubelet[2849]: I0307 00:54:49.782189 2849 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:54:49.782535 kubelet[2849]: I0307 00:54:49.782258 2849 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-26-221","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:54:49.782727 kubelet[2849]: I0307 00:54:49.782536 2849 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 00:54:49.782727 kubelet[2849]: I0307 00:54:49.782556 2849 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:54:49.782944 kubelet[2849]: I0307 00:54:49.782915 2849 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:49.788967 kubelet[2849]: I0307 00:54:49.788897 2849 kubelet.go:480] "Attempting to sync node with API 
server" Mar 7 00:54:49.788967 kubelet[2849]: I0307 00:54:49.788949 2849 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:54:49.790315 kubelet[2849]: I0307 00:54:49.789000 2849 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:54:49.790315 kubelet[2849]: I0307 00:54:49.789034 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:54:49.804279 kubelet[2849]: E0307 00:54:49.804179 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:54:49.804467 kubelet[2849]: I0307 00:54:49.804427 2849 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:54:49.805583 kubelet[2849]: I0307 00:54:49.805530 2849 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:54:49.805831 kubelet[2849]: W0307 00:54:49.805795 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 00:54:49.808192 kubelet[2849]: E0307 00:54:49.808143 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-221&limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:54:49.811964 kubelet[2849]: I0307 00:54:49.811929 2849 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:54:49.812175 kubelet[2849]: I0307 00:54:49.812158 2849 server.go:1289] "Started kubelet" Mar 7 00:54:49.815037 kubelet[2849]: I0307 00:54:49.814765 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:54:49.820151 kubelet[2849]: E0307 00:54:49.816838 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.221:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.221:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-221.189a6908cb1e1099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-221,UID:ip-172-31-26-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-221,},FirstTimestamp:2026-03-07 00:54:49.812111513 +0000 UTC m=+2.674711407,LastTimestamp:2026-03-07 00:54:49.812111513 +0000 UTC m=+2.674711407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-221,}" Mar 7 00:54:49.824330 kubelet[2849]: I0307 00:54:49.824271 2849 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:54:49.826055 kubelet[2849]: I0307 00:54:49.826015 2849 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:54:49.826630 kubelet[2849]: I0307 00:54:49.826561 2849 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 00:54:49.829369 kubelet[2849]: I0307 00:54:49.829313 2849 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:54:49.829751 kubelet[2849]: E0307 00:54:49.829698 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-221\" not found" Mar 7 00:54:49.833266 kubelet[2849]: I0307 00:54:49.833106 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:54:49.833539 kubelet[2849]: I0307 00:54:49.833503 2849 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:54:49.833884 kubelet[2849]: I0307 00:54:49.833846 2849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:54:49.835372 kubelet[2849]: I0307 00:54:49.834657 2849 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:54:49.835372 kubelet[2849]: I0307 00:54:49.834769 2849 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:54:49.836549 kubelet[2849]: E0307 00:54:49.836496 2849 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:54:49.840705 kubelet[2849]: I0307 00:54:49.840100 2849 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:54:49.840705 kubelet[2849]: I0307 00:54:49.840266 2849 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:54:49.840705 kubelet[2849]: E0307 00:54:49.840658 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:54:49.844582 kubelet[2849]: E0307 00:54:49.841055 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": dial tcp 172.31.26.221:6443: connect: connection refused" interval="200ms" Mar 7 00:54:49.845882 kubelet[2849]: I0307 00:54:49.845837 2849 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:54:49.882315 kubelet[2849]: I0307 00:54:49.882274 2849 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:54:49.884405 kubelet[2849]: I0307 00:54:49.882838 2849 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:54:49.884405 kubelet[2849]: I0307 00:54:49.882888 2849 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:54:49.884405 kubelet[2849]: I0307 00:54:49.882903 2849 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:54:49.884405 kubelet[2849]: E0307 00:54:49.882989 2849 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:54:49.886765 kubelet[2849]: E0307 00:54:49.886682 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:54:49.888570 kubelet[2849]: I0307 00:54:49.888496 2849 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:54:49.888570 kubelet[2849]: I0307 00:54:49.888539 2849 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:54:49.888570 kubelet[2849]: I0307 00:54:49.888573 2849 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:49.893934 kubelet[2849]: I0307 00:54:49.893880 2849 policy_none.go:49] "None policy: Start" Mar 7 00:54:49.893934 kubelet[2849]: I0307 00:54:49.893924 2849 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:54:49.894139 kubelet[2849]: I0307 00:54:49.893949 2849 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:54:49.906035 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 00:54:49.923566 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 00:54:49.930807 kubelet[2849]: E0307 00:54:49.930763 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-221\" not found" Mar 7 00:54:49.931743 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 00:54:49.943375 kubelet[2849]: E0307 00:54:49.943332 2849 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:54:49.943831 kubelet[2849]: I0307 00:54:49.943803 2849 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:54:49.944001 kubelet[2849]: I0307 00:54:49.943948 2849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:54:49.944443 kubelet[2849]: I0307 00:54:49.944405 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:54:49.946835 kubelet[2849]: E0307 00:54:49.946796 2849 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:54:49.947066 kubelet[2849]: E0307 00:54:49.947041 2849 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-221\" not found" Mar 7 00:54:50.006362 systemd[1]: Created slice kubepods-burstable-pod437d0aac0ee7c177d86da4cc0f0a276b.slice - libcontainer container kubepods-burstable-pod437d0aac0ee7c177d86da4cc0f0a276b.slice. Mar 7 00:54:50.025317 kubelet[2849]: E0307 00:54:50.024756 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:50.032752 systemd[1]: Created slice kubepods-burstable-pod6d8db190465e00c8ce672f3c5e1779a7.slice - libcontainer container kubepods-burstable-pod6d8db190465e00c8ce672f3c5e1779a7.slice. 
Mar 7 00:54:50.035490 kubelet[2849]: I0307 00:54:50.035435 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-ca-certs\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:50.035736 kubelet[2849]: I0307 00:54:50.035710 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:50.035960 kubelet[2849]: I0307 00:54:50.035935 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:50.036261 kubelet[2849]: I0307 00:54:50.036162 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:50.036432 kubelet[2849]: I0307 00:54:50.036364 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb95b78b959f6b350aa16421bdf655c8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-221\" (UID: \"cb95b78b959f6b350aa16421bdf655c8\") " 
pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:54:50.036637 kubelet[2849]: I0307 00:54:50.036584 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:50.036809 kubelet[2849]: I0307 00:54:50.036754 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:50.037017 kubelet[2849]: I0307 00:54:50.036933 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:50.037173 kubelet[2849]: I0307 00:54:50.037125 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:50.043966 kubelet[2849]: E0307 00:54:50.043601 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:50.047150 kubelet[2849]: I0307 
00:54:50.046071 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:54:50.047150 kubelet[2849]: E0307 00:54:50.046607 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.221:6443/api/v1/nodes\": dial tcp 172.31.26.221:6443: connect: connection refused" node="ip-172-31-26-221" Mar 7 00:54:50.047529 kubelet[2849]: E0307 00:54:50.047474 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": dial tcp 172.31.26.221:6443: connect: connection refused" interval="400ms" Mar 7 00:54:50.053899 systemd[1]: Created slice kubepods-burstable-podcb95b78b959f6b350aa16421bdf655c8.slice - libcontainer container kubepods-burstable-podcb95b78b959f6b350aa16421bdf655c8.slice. Mar 7 00:54:50.058723 kubelet[2849]: E0307 00:54:50.058641 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:50.248991 kubelet[2849]: I0307 00:54:50.248929 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:54:50.249550 kubelet[2849]: E0307 00:54:50.249466 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.221:6443/api/v1/nodes\": dial tcp 172.31.26.221:6443: connect: connection refused" node="ip-172-31-26-221" Mar 7 00:54:50.327334 containerd[2021]: time="2026-03-07T00:54:50.327049971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-221,Uid:437d0aac0ee7c177d86da4cc0f0a276b,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:50.346153 containerd[2021]: time="2026-03-07T00:54:50.345548919Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-221,Uid:6d8db190465e00c8ce672f3c5e1779a7,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:50.360395 containerd[2021]: time="2026-03-07T00:54:50.360336748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-221,Uid:cb95b78b959f6b350aa16421bdf655c8,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:50.448307 kubelet[2849]: E0307 00:54:50.448209 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": dial tcp 172.31.26.221:6443: connect: connection refused" interval="800ms" Mar 7 00:54:50.652106 kubelet[2849]: I0307 00:54:50.651959 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:54:50.652584 kubelet[2849]: E0307 00:54:50.652531 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.221:6443/api/v1/nodes\": dial tcp 172.31.26.221:6443: connect: connection refused" node="ip-172-31-26-221" Mar 7 00:54:50.758195 kubelet[2849]: E0307 00:54:50.758133 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.221:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:54:50.805365 kubelet[2849]: E0307 00:54:50.805289 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.221:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:54:50.874306 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3659376101.mount: Deactivated successfully. Mar 7 00:54:50.888204 containerd[2021]: time="2026-03-07T00:54:50.888145338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:50.895526 containerd[2021]: time="2026-03-07T00:54:50.895433814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 7 00:54:50.897426 containerd[2021]: time="2026-03-07T00:54:50.897295422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:50.900061 containerd[2021]: time="2026-03-07T00:54:50.899936310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:50.903345 containerd[2021]: time="2026-03-07T00:54:50.902487570Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:50.906835 containerd[2021]: time="2026-03-07T00:54:50.906738942Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:50.907574 containerd[2021]: time="2026-03-07T00:54:50.906837894Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:50.914839 containerd[2021]: time="2026-03-07T00:54:50.914768742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:50.916959 containerd[2021]: time="2026-03-07T00:54:50.916886574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.675911ms" Mar 7 00:54:50.921624 containerd[2021]: time="2026-03-07T00:54:50.921356418Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.697075ms" Mar 7 00:54:50.941249 containerd[2021]: time="2026-03-07T00:54:50.941155806Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.705214ms" Mar 7 00:54:51.153523 containerd[2021]: time="2026-03-07T00:54:51.151315743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:51.153523 containerd[2021]: time="2026-03-07T00:54:51.153288195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:51.154216 containerd[2021]: time="2026-03-07T00:54:51.153455883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.157275 containerd[2021]: time="2026-03-07T00:54:51.156445959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.158637 containerd[2021]: time="2026-03-07T00:54:51.158457795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:51.159905 containerd[2021]: time="2026-03-07T00:54:51.159793839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:51.160017 containerd[2021]: time="2026-03-07T00:54:51.159924183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.160187 containerd[2021]: time="2026-03-07T00:54:51.160126035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.165931 containerd[2021]: time="2026-03-07T00:54:51.165533932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:51.165931 containerd[2021]: time="2026-03-07T00:54:51.165620812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:51.165931 containerd[2021]: time="2026-03-07T00:54:51.165645760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.165931 containerd[2021]: time="2026-03-07T00:54:51.165788104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:51.215055 systemd[1]: Started cri-containerd-880274cc0a37240e9da97bf72874d002beb68792b82cf6c1978e0964574f8f85.scope - libcontainer container 880274cc0a37240e9da97bf72874d002beb68792b82cf6c1978e0964574f8f85. Mar 7 00:54:51.222563 systemd[1]: Started cri-containerd-daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd.scope - libcontainer container daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd. Mar 7 00:54:51.239557 systemd[1]: Started cri-containerd-b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5.scope - libcontainer container b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5. Mar 7 00:54:51.250860 kubelet[2849]: E0307 00:54:51.250746 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": dial tcp 172.31.26.221:6443: connect: connection refused" interval="1.6s" Mar 7 00:54:51.326261 kubelet[2849]: E0307 00:54:51.326140 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.221:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-221&limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:54:51.331254 kubelet[2849]: E0307 00:54:51.330933 2849 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.221:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.221:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:54:51.347362 containerd[2021]: time="2026-03-07T00:54:51.344568844Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-221,Uid:437d0aac0ee7c177d86da4cc0f0a276b,Namespace:kube-system,Attempt:0,} returns sandbox id \"880274cc0a37240e9da97bf72874d002beb68792b82cf6c1978e0964574f8f85\"" Mar 7 00:54:51.359841 containerd[2021]: time="2026-03-07T00:54:51.359772208Z" level=info msg="CreateContainer within sandbox \"880274cc0a37240e9da97bf72874d002beb68792b82cf6c1978e0964574f8f85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:54:51.373588 containerd[2021]: time="2026-03-07T00:54:51.373343609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-221,Uid:cb95b78b959f6b350aa16421bdf655c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd\"" Mar 7 00:54:51.377538 containerd[2021]: time="2026-03-07T00:54:51.377461805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-221,Uid:6d8db190465e00c8ce672f3c5e1779a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5\"" Mar 7 00:54:51.387329 containerd[2021]: time="2026-03-07T00:54:51.387191669Z" level=info msg="CreateContainer within sandbox \"daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:54:51.391720 containerd[2021]: time="2026-03-07T00:54:51.391654097Z" level=info msg="CreateContainer within sandbox \"b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:54:51.398196 containerd[2021]: time="2026-03-07T00:54:51.398107577Z" level=info msg="CreateContainer within sandbox \"880274cc0a37240e9da97bf72874d002beb68792b82cf6c1978e0964574f8f85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"4933a9e818a2b36ad92f187a54a84ecec8b84fd5753609b49ecc2aa51daed6af\"" Mar 7 00:54:51.399818 containerd[2021]: time="2026-03-07T00:54:51.399750365Z" level=info msg="StartContainer for \"4933a9e818a2b36ad92f187a54a84ecec8b84fd5753609b49ecc2aa51daed6af\"" Mar 7 00:54:51.433972 containerd[2021]: time="2026-03-07T00:54:51.432553421Z" level=info msg="CreateContainer within sandbox \"daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29\"" Mar 7 00:54:51.435152 containerd[2021]: time="2026-03-07T00:54:51.435091925Z" level=info msg="StartContainer for \"854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29\"" Mar 7 00:54:51.450649 containerd[2021]: time="2026-03-07T00:54:51.450540725Z" level=info msg="CreateContainer within sandbox \"b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68\"" Mar 7 00:54:51.453373 containerd[2021]: time="2026-03-07T00:54:51.453167129Z" level=info msg="StartContainer for \"378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68\"" Mar 7 00:54:51.463563 kubelet[2849]: I0307 00:54:51.463046 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:54:51.463717 kubelet[2849]: E0307 00:54:51.463659 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.221:6443/api/v1/nodes\": dial tcp 172.31.26.221:6443: connect: connection refused" node="ip-172-31-26-221" Mar 7 00:54:51.473755 systemd[1]: Started cri-containerd-4933a9e818a2b36ad92f187a54a84ecec8b84fd5753609b49ecc2aa51daed6af.scope - libcontainer container 4933a9e818a2b36ad92f187a54a84ecec8b84fd5753609b49ecc2aa51daed6af. 
Mar 7 00:54:51.533067 systemd[1]: Started cri-containerd-854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29.scope - libcontainer container 854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29. Mar 7 00:54:51.563578 systemd[1]: Started cri-containerd-378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68.scope - libcontainer container 378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68. Mar 7 00:54:51.601533 containerd[2021]: time="2026-03-07T00:54:51.601464174Z" level=info msg="StartContainer for \"4933a9e818a2b36ad92f187a54a84ecec8b84fd5753609b49ecc2aa51daed6af\" returns successfully" Mar 7 00:54:51.652907 containerd[2021]: time="2026-03-07T00:54:51.652819626Z" level=info msg="StartContainer for \"854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29\" returns successfully" Mar 7 00:54:51.705832 containerd[2021]: time="2026-03-07T00:54:51.705613614Z" level=info msg="StartContainer for \"378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68\" returns successfully" Mar 7 00:54:51.808193 kubelet[2849]: E0307 00:54:51.808012 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.221:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.221:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-221.189a6908cb1e1099 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-221,UID:ip-172-31-26-221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-221,},FirstTimestamp:2026-03-07 00:54:49.812111513 +0000 UTC m=+2.674711407,LastTimestamp:2026-03-07 00:54:49.812111513 +0000 UTC m=+2.674711407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-221,}" Mar 7 00:54:51.907364 
kubelet[2849]: E0307 00:54:51.906807 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:51.912960 kubelet[2849]: E0307 00:54:51.912768 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:51.920172 kubelet[2849]: E0307 00:54:51.919775 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:52.922435 kubelet[2849]: E0307 00:54:52.921913 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:52.922435 kubelet[2849]: E0307 00:54:52.921995 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:52.923067 kubelet[2849]: E0307 00:54:52.922460 2849 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:53.066649 kubelet[2849]: I0307 00:54:53.065625 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:54:56.650746 kubelet[2849]: E0307 00:54:56.650691 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-221\" not found" node="ip-172-31-26-221" Mar 7 00:54:56.735345 kubelet[2849]: I0307 00:54:56.735202 2849 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-221" Mar 7 00:54:56.798164 kubelet[2849]: I0307 00:54:56.797686 2849 apiserver.go:52] "Watching apiserver" Mar 7 00:54:56.831073 kubelet[2849]: I0307 
00:54:56.831007 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:56.835070 kubelet[2849]: I0307 00:54:56.834965 2849 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:54:56.843582 kubelet[2849]: E0307 00:54:56.843411 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-221\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:56.843582 kubelet[2849]: I0307 00:54:56.843494 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:56.848917 kubelet[2849]: E0307 00:54:56.848537 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-221\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:54:56.848917 kubelet[2849]: I0307 00:54:56.848587 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:54:56.855029 kubelet[2849]: E0307 00:54:56.853749 2849 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-221\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:54:57.143405 update_engine[1994]: I20260307 00:54:57.143282 1994 update_attempter.cc:509] Updating boot flags... 
Mar 7 00:54:57.269274 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3146) Mar 7 00:54:57.731293 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3147) Mar 7 00:54:58.232454 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3147) Mar 7 00:54:59.213883 kubelet[2849]: I0307 00:54:59.213818 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:54:59.417878 kubelet[2849]: I0307 00:54:59.417833 2849 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:54:59.624133 systemd[1]: Reloading requested from client PID 3402 ('systemctl') (unit session-7.scope)... Mar 7 00:54:59.624493 systemd[1]: Reloading... Mar 7 00:54:59.900258 zram_generator::config[3445]: No configuration found. Mar 7 00:55:00.031282 kubelet[2849]: I0307 00:55:00.030798 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-221" podStartSLOduration=1.03077628 podStartE2EDuration="1.03077628s" podCreationTimestamp="2026-03-07 00:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:00.030734796 +0000 UTC m=+12.893334690" watchObservedRunningTime="2026-03-07 00:55:00.03077628 +0000 UTC m=+12.893376162" Mar 7 00:55:00.031282 kubelet[2849]: I0307 00:55:00.030957 2849 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-221" podStartSLOduration=1.030949044 podStartE2EDuration="1.030949044s" podCreationTimestamp="2026-03-07 00:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:00.014582003 +0000 
UTC m=+12.877181897" watchObservedRunningTime="2026-03-07 00:55:00.030949044 +0000 UTC m=+12.893548914" Mar 7 00:55:00.221481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:55:00.437830 systemd[1]: Reloading finished in 812 ms. Mar 7 00:55:00.520428 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:00.537247 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:55:00.537989 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:00.538088 systemd[1]: kubelet.service: Consumed 3.536s CPU time, 127.1M memory peak, 0B memory swap peak. Mar 7 00:55:00.550703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:00.913677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:00.937810 (kubelet)[3506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:55:01.057838 kubelet[3506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:55:01.057838 kubelet[3506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:55:01.057838 kubelet[3506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 00:55:01.057838 kubelet[3506]: I0307 00:55:01.057505 3506 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:55:01.070286 kubelet[3506]: I0307 00:55:01.069720 3506 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:55:01.070286 kubelet[3506]: I0307 00:55:01.069776 3506 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:55:01.070286 kubelet[3506]: I0307 00:55:01.070177 3506 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:55:01.072627 kubelet[3506]: I0307 00:55:01.072573 3506 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:55:01.074529 sudo[3519]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 00:55:01.076396 sudo[3519]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 00:55:01.087420 kubelet[3506]: I0307 00:55:01.087379 3506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:55:01.102606 kubelet[3506]: E0307 00:55:01.100971 3506 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:55:01.102606 kubelet[3506]: I0307 00:55:01.101072 3506 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:55:01.107693 kubelet[3506]: I0307 00:55:01.107642 3506 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 00:55:01.108158 kubelet[3506]: I0307 00:55:01.108098 3506 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:55:01.109967 kubelet[3506]: I0307 00:55:01.108148 3506 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-221","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:55:01.109967 kubelet[3506]: I0307 00:55:01.108444 3506 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
00:55:01.109967 kubelet[3506]: I0307 00:55:01.108466 3506 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:55:01.109967 kubelet[3506]: I0307 00:55:01.108548 3506 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:01.109967 kubelet[3506]: I0307 00:55:01.108884 3506 kubelet.go:480] "Attempting to sync node with API server" Mar 7 00:55:01.110379 kubelet[3506]: I0307 00:55:01.108912 3506 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:55:01.110379 kubelet[3506]: I0307 00:55:01.108960 3506 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:55:01.110379 kubelet[3506]: I0307 00:55:01.108989 3506 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:55:01.115808 kubelet[3506]: I0307 00:55:01.115556 3506 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:55:01.118248 kubelet[3506]: I0307 00:55:01.116522 3506 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:55:01.124745 kubelet[3506]: I0307 00:55:01.124684 3506 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:55:01.124894 kubelet[3506]: I0307 00:55:01.124764 3506 server.go:1289] "Started kubelet" Mar 7 00:55:01.140565 kubelet[3506]: I0307 00:55:01.139725 3506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:55:01.151437 kubelet[3506]: I0307 00:55:01.151351 3506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:55:01.154088 kubelet[3506]: I0307 00:55:01.152832 3506 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:55:01.160272 kubelet[3506]: I0307 00:55:01.159458 3506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:55:01.162251 kubelet[3506]: I0307 00:55:01.160924 3506 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:55:01.196010 kubelet[3506]: I0307 00:55:01.166994 3506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:55:01.209274 kubelet[3506]: I0307 00:55:01.170755 3506 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:55:01.212272 kubelet[3506]: I0307 00:55:01.170846 3506 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:55:01.230982 kubelet[3506]: E0307 00:55:01.177295 3506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-221\" not found" Mar 7 00:55:01.230982 kubelet[3506]: I0307 00:55:01.214674 3506 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:55:01.265721 kubelet[3506]: E0307 00:55:01.263843 3506 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:55:01.265721 kubelet[3506]: I0307 00:55:01.264932 3506 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:55:01.270970 kubelet[3506]: I0307 00:55:01.267208 3506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:55:01.272389 kubelet[3506]: I0307 00:55:01.272344 3506 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:55:01.305424 kubelet[3506]: I0307 00:55:01.305364 3506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:55:01.308543 kubelet[3506]: I0307 00:55:01.308338 3506 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 00:55:01.309244 kubelet[3506]: I0307 00:55:01.308965 3506 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:55:01.309588 kubelet[3506]: I0307 00:55:01.309381 3506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 00:55:01.310246 kubelet[3506]: I0307 00:55:01.310197 3506 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:55:01.311281 kubelet[3506]: E0307 00:55:01.310519 3506 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:55:01.411563 kubelet[3506]: E0307 00:55:01.411512 3506 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 00:55:01.437561 kubelet[3506]: I0307 00:55:01.437506 3506 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:55:01.437980 kubelet[3506]: I0307 00:55:01.437686 3506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:55:01.437980 kubelet[3506]: I0307 00:55:01.437738 3506 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:01.438774 kubelet[3506]: I0307 00:55:01.438413 3506 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:55:01.438774 kubelet[3506]: I0307 00:55:01.438447 3506 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:55:01.438774 kubelet[3506]: I0307 00:55:01.438495 3506 policy_none.go:49] "None policy: Start" Mar 7 00:55:01.438774 kubelet[3506]: I0307 00:55:01.438516 3506 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:55:01.438774 kubelet[3506]: I0307 00:55:01.438539 3506 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:55:01.439017 kubelet[3506]: I0307 00:55:01.438806 3506 state_mem.go:75] "Updated machine memory state" Mar 7 00:55:01.464815 kubelet[3506]: E0307 00:55:01.461895 3506 
manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:55:01.464815 kubelet[3506]: I0307 00:55:01.462197 3506 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:55:01.464815 kubelet[3506]: I0307 00:55:01.462243 3506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:55:01.464815 kubelet[3506]: I0307 00:55:01.462848 3506 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:55:01.472507 kubelet[3506]: E0307 00:55:01.471962 3506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:55:01.590699 kubelet[3506]: I0307 00:55:01.590647 3506 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-221" Mar 7 00:55:01.612373 kubelet[3506]: I0307 00:55:01.612322 3506 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-221" Mar 7 00:55:01.612496 kubelet[3506]: I0307 00:55:01.612449 3506 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-221" Mar 7 00:55:01.616333 kubelet[3506]: I0307 00:55:01.616282 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:55:01.619169 kubelet[3506]: I0307 00:55:01.617109 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.620723 kubelet[3506]: I0307 00:55:01.619914 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:01.634488 kubelet[3506]: I0307 00:55:01.634432 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.634635 kubelet[3506]: I0307 00:55:01.634499 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.634635 kubelet[3506]: I0307 00:55:01.634541 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.634635 kubelet[3506]: I0307 00:55:01.634581 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.634784 kubelet[3506]: I0307 00:55:01.634648 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d8db190465e00c8ce672f3c5e1779a7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-221\" (UID: \"6d8db190465e00c8ce672f3c5e1779a7\") " pod="kube-system/kube-controller-manager-ip-172-31-26-221" Mar 7 00:55:01.634784 kubelet[3506]: I0307 00:55:01.634700 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb95b78b959f6b350aa16421bdf655c8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-221\" (UID: \"cb95b78b959f6b350aa16421bdf655c8\") " pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:55:01.634784 kubelet[3506]: I0307 00:55:01.634734 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-ca-certs\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:01.634784 kubelet[3506]: I0307 00:55:01.634771 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:01.635033 kubelet[3506]: I0307 00:55:01.634816 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/437d0aac0ee7c177d86da4cc0f0a276b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-221\" (UID: \"437d0aac0ee7c177d86da4cc0f0a276b\") " pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:01.638419 kubelet[3506]: E0307 00:55:01.637078 3506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-221\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:55:01.641531 kubelet[3506]: E0307 00:55:01.641375 3506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-221\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:02.096583 sudo[3519]: pam_unix(sudo:session): session closed for user root Mar 7 
00:55:02.113677 kubelet[3506]: I0307 00:55:02.113334 3506 apiserver.go:52] "Watching apiserver" Mar 7 00:55:02.130337 kubelet[3506]: I0307 00:55:02.130256 3506 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:55:02.379586 kubelet[3506]: I0307 00:55:02.379017 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:55:02.383268 kubelet[3506]: I0307 00:55:02.381573 3506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:02.394272 kubelet[3506]: E0307 00:55:02.393903 3506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-221\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-221" Mar 7 00:55:02.412934 kubelet[3506]: E0307 00:55:02.412884 3506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-221\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-221" Mar 7 00:55:02.437643 kubelet[3506]: I0307 00:55:02.436865 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-221" podStartSLOduration=1.436844367 podStartE2EDuration="1.436844367s" podCreationTimestamp="2026-03-07 00:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:02.435074355 +0000 UTC m=+1.484658332" watchObservedRunningTime="2026-03-07 00:55:02.436844367 +0000 UTC m=+1.486428332" Mar 7 00:55:04.388567 kubelet[3506]: I0307 00:55:04.388349 3506 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:55:04.390384 containerd[2021]: time="2026-03-07T00:55:04.390303977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 00:55:04.391959 kubelet[3506]: I0307 00:55:04.391474 3506 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:55:04.791002 sudo[2337]: pam_unix(sudo:session): session closed for user root Mar 7 00:55:04.869719 sshd[2334]: pam_unix(sshd:session): session closed for user core Mar 7 00:55:04.877374 systemd[1]: sshd@6-172.31.26.221:22-20.161.92.111:58698.service: Deactivated successfully. Mar 7 00:55:04.882796 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:55:04.883206 systemd[1]: session-7.scope: Consumed 11.701s CPU time, 151.7M memory peak, 0B memory swap peak. Mar 7 00:55:04.884646 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:55:04.887425 systemd-logind[1993]: Removed session 7. Mar 7 00:55:05.258872 systemd[1]: Created slice kubepods-besteffort-podad1c0fb0_54f4_45f7_9ec3_e83c73d469d7.slice - libcontainer container kubepods-besteffort-podad1c0fb0_54f4_45f7_9ec3_e83c73d469d7.slice. 
Mar 7 00:55:05.265539 kubelet[3506]: I0307 00:55:05.263508 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-kube-proxy\") pod \"kube-proxy-rhzgc\" (UID: \"ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7\") " pod="kube-system/kube-proxy-rhzgc" Mar 7 00:55:05.265539 kubelet[3506]: I0307 00:55:05.263562 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-lib-modules\") pod \"kube-proxy-rhzgc\" (UID: \"ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7\") " pod="kube-system/kube-proxy-rhzgc" Mar 7 00:55:05.265539 kubelet[3506]: I0307 00:55:05.263604 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dk7g\" (UniqueName: \"kubernetes.io/projected/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-kube-api-access-9dk7g\") pod \"kube-proxy-rhzgc\" (UID: \"ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7\") " pod="kube-system/kube-proxy-rhzgc" Mar 7 00:55:05.265539 kubelet[3506]: I0307 00:55:05.263659 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-xtables-lock\") pod \"kube-proxy-rhzgc\" (UID: \"ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7\") " pod="kube-system/kube-proxy-rhzgc" Mar 7 00:55:05.284075 systemd[1]: Created slice kubepods-burstable-pod347db95b_1bb5_4912_802d_8d432587f80e.slice - libcontainer container kubepods-burstable-pod347db95b_1bb5_4912_802d_8d432587f80e.slice. 
Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.364730 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-cgroup\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.364841 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cni-path\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.364903 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347db95b-1bb5-4912-802d-8d432587f80e-clustermesh-secrets\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.364945 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-hubble-tls\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.365110 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-run\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.366625 kubelet[3506]: I0307 00:55:05.365197 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-hostproc\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.367035 kubelet[3506]: I0307 00:55:05.365307 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-xtables-lock\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.367035 kubelet[3506]: I0307 00:55:05.365349 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347db95b-1bb5-4912-802d-8d432587f80e-cilium-config-path\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.367035 kubelet[3506]: I0307 00:55:05.365413 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlskn\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-kube-api-access-nlskn\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.367035 kubelet[3506]: I0307 00:55:05.365536 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-net\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.367035 kubelet[3506]: I0307 00:55:05.365578 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-kernel\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.368640 kubelet[3506]: I0307 00:55:05.365640 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-etc-cni-netd\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.368640 kubelet[3506]: I0307 00:55:05.365706 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-lib-modules\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.368640 kubelet[3506]: I0307 00:55:05.365750 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-bpf-maps\") pod \"cilium-2n5fx\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") " pod="kube-system/cilium-2n5fx" Mar 7 00:55:05.377347 kubelet[3506]: E0307 00:55:05.377284 3506 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 00:55:05.377567 kubelet[3506]: E0307 00:55:05.377545 3506 projected.go:194] Error preparing data for projected volume kube-api-access-9dk7g for pod kube-system/kube-proxy-rhzgc: configmap "kube-root-ca.crt" not found Mar 7 00:55:05.377818 kubelet[3506]: E0307 00:55:05.377796 3506 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-kube-api-access-9dk7g podName:ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7 nodeName:}" failed. 
No retries permitted until 2026-03-07 00:55:05.877718066 +0000 UTC m=+4.927302019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9dk7g" (UniqueName: "kubernetes.io/projected/ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7-kube-api-access-9dk7g") pod "kube-proxy-rhzgc" (UID: "ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7") : configmap "kube-root-ca.crt" not found Mar 7 00:55:05.555994 kubelet[3506]: I0307 00:55:05.554813 3506 status_manager.go:895] "Failed to get status for pod" podUID="3dc2a231-e8fd-4a59-8149-a1c884c8c509" pod="kube-system/cilium-operator-6c4d7847fc-r7b7t" err="pods \"cilium-operator-6c4d7847fc-r7b7t\" is forbidden: User \"system:node:ip-172-31-26-221\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-221' and this object" Mar 7 00:55:05.560118 systemd[1]: Created slice kubepods-besteffort-pod3dc2a231_e8fd_4a59_8149_a1c884c8c509.slice - libcontainer container kubepods-besteffort-pod3dc2a231_e8fd_4a59_8149_a1c884c8c509.slice. 
Mar 7 00:55:05.567337 kubelet[3506]: I0307 00:55:05.567276 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqzrq\" (UniqueName: \"kubernetes.io/projected/3dc2a231-e8fd-4a59-8149-a1c884c8c509-kube-api-access-nqzrq\") pod \"cilium-operator-6c4d7847fc-r7b7t\" (UID: \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\") " pod="kube-system/cilium-operator-6c4d7847fc-r7b7t" Mar 7 00:55:05.567501 kubelet[3506]: I0307 00:55:05.567349 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3dc2a231-e8fd-4a59-8149-a1c884c8c509-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r7b7t\" (UID: \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\") " pod="kube-system/cilium-operator-6c4d7847fc-r7b7t" Mar 7 00:55:05.595089 containerd[2021]: time="2026-03-07T00:55:05.593973271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n5fx,Uid:347db95b-1bb5-4912-802d-8d432587f80e,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:05.678817 containerd[2021]: time="2026-03-07T00:55:05.678651368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:05.681989 containerd[2021]: time="2026-03-07T00:55:05.679329176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:05.681989 containerd[2021]: time="2026-03-07T00:55:05.681486620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:05.681989 containerd[2021]: time="2026-03-07T00:55:05.681679760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:05.733559 systemd[1]: Started cri-containerd-feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83.scope - libcontainer container feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83. Mar 7 00:55:05.777590 containerd[2021]: time="2026-03-07T00:55:05.777525596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n5fx,Uid:347db95b-1bb5-4912-802d-8d432587f80e,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\"" Mar 7 00:55:05.781191 containerd[2021]: time="2026-03-07T00:55:05.781108640Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 00:55:05.872113 containerd[2021]: time="2026-03-07T00:55:05.872048061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7b7t,Uid:3dc2a231-e8fd-4a59-8149-a1c884c8c509,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:05.926542 containerd[2021]: time="2026-03-07T00:55:05.926208273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:05.926542 containerd[2021]: time="2026-03-07T00:55:05.926355381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:05.926542 containerd[2021]: time="2026-03-07T00:55:05.926452077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:05.927256 containerd[2021]: time="2026-03-07T00:55:05.927075969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:05.957572 systemd[1]: Started cri-containerd-46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda.scope - libcontainer container 46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda. Mar 7 00:55:06.030210 containerd[2021]: time="2026-03-07T00:55:06.030089861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r7b7t,Uid:3dc2a231-e8fd-4a59-8149-a1c884c8c509,Namespace:kube-system,Attempt:0,} returns sandbox id \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\"" Mar 7 00:55:06.173771 containerd[2021]: time="2026-03-07T00:55:06.173611362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhzgc,Uid:ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:06.218037 containerd[2021]: time="2026-03-07T00:55:06.217518318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:06.218037 containerd[2021]: time="2026-03-07T00:55:06.217616658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:06.218037 containerd[2021]: time="2026-03-07T00:55:06.217716006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:06.218532 containerd[2021]: time="2026-03-07T00:55:06.218167626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:06.252537 systemd[1]: Started cri-containerd-e86a7c5b070e76ddb143194fc6eab79ce57727b6a2b7a67dfe10f78b2b8d1350.scope - libcontainer container e86a7c5b070e76ddb143194fc6eab79ce57727b6a2b7a67dfe10f78b2b8d1350. 
Mar 7 00:55:06.295957 containerd[2021]: time="2026-03-07T00:55:06.295856875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhzgc,Uid:ad1c0fb0-54f4-45f7-9ec3-e83c73d469d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e86a7c5b070e76ddb143194fc6eab79ce57727b6a2b7a67dfe10f78b2b8d1350\"" Mar 7 00:55:06.306160 containerd[2021]: time="2026-03-07T00:55:06.306100507Z" level=info msg="CreateContainer within sandbox \"e86a7c5b070e76ddb143194fc6eab79ce57727b6a2b7a67dfe10f78b2b8d1350\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:55:06.340215 containerd[2021]: time="2026-03-07T00:55:06.340136707Z" level=info msg="CreateContainer within sandbox \"e86a7c5b070e76ddb143194fc6eab79ce57727b6a2b7a67dfe10f78b2b8d1350\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d2ed60bb777072259e84b0e2df467ce0f489a778a86e920726f92c7da995ec2\"" Mar 7 00:55:06.341577 containerd[2021]: time="2026-03-07T00:55:06.341495731Z" level=info msg="StartContainer for \"5d2ed60bb777072259e84b0e2df467ce0f489a778a86e920726f92c7da995ec2\"" Mar 7 00:55:06.392722 systemd[1]: Started cri-containerd-5d2ed60bb777072259e84b0e2df467ce0f489a778a86e920726f92c7da995ec2.scope - libcontainer container 5d2ed60bb777072259e84b0e2df467ce0f489a778a86e920726f92c7da995ec2. 
Mar 7 00:55:06.457795 containerd[2021]: time="2026-03-07T00:55:06.457491427Z" level=info msg="StartContainer for \"5d2ed60bb777072259e84b0e2df467ce0f489a778a86e920726f92c7da995ec2\" returns successfully" Mar 7 00:55:09.891788 kubelet[3506]: I0307 00:55:09.890422 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhzgc" podStartSLOduration=4.890398633 podStartE2EDuration="4.890398633s" podCreationTimestamp="2026-03-07 00:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:07.542335017 +0000 UTC m=+6.591918994" watchObservedRunningTime="2026-03-07 00:55:09.890398633 +0000 UTC m=+8.939982598" Mar 7 00:55:12.482423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473848148.mount: Deactivated successfully. Mar 7 00:55:15.287851 containerd[2021]: time="2026-03-07T00:55:15.287756943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:15.292319 containerd[2021]: time="2026-03-07T00:55:15.292194291Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Mar 7 00:55:15.294317 containerd[2021]: time="2026-03-07T00:55:15.294203799Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:15.297988 containerd[2021]: time="2026-03-07T00:55:15.297781851Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.516586859s" Mar 7 00:55:15.297988 containerd[2021]: time="2026-03-07T00:55:15.297845739Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 7 00:55:15.301282 containerd[2021]: time="2026-03-07T00:55:15.301184667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 00:55:15.309294 containerd[2021]: time="2026-03-07T00:55:15.309179379Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 00:55:15.340468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974143260.mount: Deactivated successfully. Mar 7 00:55:15.342908 containerd[2021]: time="2026-03-07T00:55:15.341652868Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\"" Mar 7 00:55:15.344076 containerd[2021]: time="2026-03-07T00:55:15.343940248Z" level=info msg="StartContainer for \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\"" Mar 7 00:55:15.415533 systemd[1]: Started cri-containerd-86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07.scope - libcontainer container 86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07. 
Mar 7 00:55:15.467313 containerd[2021]: time="2026-03-07T00:55:15.467211244Z" level=info msg="StartContainer for \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\" returns successfully" Mar 7 00:55:15.497260 systemd[1]: cri-containerd-86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07.scope: Deactivated successfully. Mar 7 00:55:16.216725 containerd[2021]: time="2026-03-07T00:55:16.216489112Z" level=info msg="shim disconnected" id=86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07 namespace=k8s.io Mar 7 00:55:16.216725 containerd[2021]: time="2026-03-07T00:55:16.216591388Z" level=warning msg="cleaning up after shim disconnected" id=86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07 namespace=k8s.io Mar 7 00:55:16.216725 containerd[2021]: time="2026-03-07T00:55:16.216636472Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:16.331710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07-rootfs.mount: Deactivated successfully. Mar 7 00:55:16.474474 containerd[2021]: time="2026-03-07T00:55:16.473544941Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 00:55:16.519557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744645636.mount: Deactivated successfully. 
Mar 7 00:55:16.551949 containerd[2021]: time="2026-03-07T00:55:16.551870298Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\"" Mar 7 00:55:16.552933 containerd[2021]: time="2026-03-07T00:55:16.552890274Z" level=info msg="StartContainer for \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\"" Mar 7 00:55:16.626570 systemd[1]: Started cri-containerd-453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133.scope - libcontainer container 453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133. Mar 7 00:55:16.682810 containerd[2021]: time="2026-03-07T00:55:16.682725606Z" level=info msg="StartContainer for \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\" returns successfully" Mar 7 00:55:16.716113 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 00:55:16.718622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:55:16.718761 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:55:16.731008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:55:16.735205 systemd[1]: cri-containerd-453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133.scope: Deactivated successfully. Mar 7 00:55:16.784293 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 00:55:16.824039 containerd[2021]: time="2026-03-07T00:55:16.823940647Z" level=info msg="shim disconnected" id=453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133 namespace=k8s.io Mar 7 00:55:16.824039 containerd[2021]: time="2026-03-07T00:55:16.824021323Z" level=warning msg="cleaning up after shim disconnected" id=453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133 namespace=k8s.io Mar 7 00:55:16.825044 containerd[2021]: time="2026-03-07T00:55:16.824043799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:17.333690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133-rootfs.mount: Deactivated successfully. Mar 7 00:55:17.491138 containerd[2021]: time="2026-03-07T00:55:17.491038866Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 00:55:17.538710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2517172801.mount: Deactivated successfully. Mar 7 00:55:17.545882 containerd[2021]: time="2026-03-07T00:55:17.545519431Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\"" Mar 7 00:55:17.547730 containerd[2021]: time="2026-03-07T00:55:17.547350871Z" level=info msg="StartContainer for \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\"" Mar 7 00:55:17.643599 systemd[1]: Started cri-containerd-d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018.scope - libcontainer container d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018. 
Mar 7 00:55:17.746274 containerd[2021]: time="2026-03-07T00:55:17.745208960Z" level=info msg="StartContainer for \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\" returns successfully" Mar 7 00:55:17.756454 systemd[1]: cri-containerd-d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018.scope: Deactivated successfully. Mar 7 00:55:17.874470 containerd[2021]: time="2026-03-07T00:55:17.874391804Z" level=info msg="shim disconnected" id=d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018 namespace=k8s.io Mar 7 00:55:17.874791 containerd[2021]: time="2026-03-07T00:55:17.874760168Z" level=warning msg="cleaning up after shim disconnected" id=d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018 namespace=k8s.io Mar 7 00:55:17.874900 containerd[2021]: time="2026-03-07T00:55:17.874874108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:18.133887 containerd[2021]: time="2026-03-07T00:55:18.133814201Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:18.136619 containerd[2021]: time="2026-03-07T00:55:18.136363637Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Mar 7 00:55:18.141259 containerd[2021]: time="2026-03-07T00:55:18.140641673Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:18.143464 containerd[2021]: time="2026-03-07T00:55:18.143405994Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.842124415s" Mar 7 00:55:18.143658 containerd[2021]: time="2026-03-07T00:55:18.143627070Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 7 00:55:18.153831 containerd[2021]: time="2026-03-07T00:55:18.153767022Z" level=info msg="CreateContainer within sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 00:55:18.179656 containerd[2021]: time="2026-03-07T00:55:18.179457546Z" level=info msg="CreateContainer within sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\"" Mar 7 00:55:18.182023 containerd[2021]: time="2026-03-07T00:55:18.181969518Z" level=info msg="StartContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\"" Mar 7 00:55:18.227563 systemd[1]: Started cri-containerd-dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7.scope - libcontainer container dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7. Mar 7 00:55:18.277826 containerd[2021]: time="2026-03-07T00:55:18.277627986Z" level=info msg="StartContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" returns successfully" Mar 7 00:55:18.333503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018-rootfs.mount: Deactivated successfully. 
Mar 7 00:55:18.502873 containerd[2021]: time="2026-03-07T00:55:18.502375915Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 00:55:18.553521 containerd[2021]: time="2026-03-07T00:55:18.553033292Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\"" Mar 7 00:55:18.556888 containerd[2021]: time="2026-03-07T00:55:18.554688704Z" level=info msg="StartContainer for \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\"" Mar 7 00:55:18.571705 kubelet[3506]: I0307 00:55:18.571492 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r7b7t" podStartSLOduration=1.458714299 podStartE2EDuration="13.5714654s" podCreationTimestamp="2026-03-07 00:55:05 +0000 UTC" firstStartedPulling="2026-03-07 00:55:06.032721749 +0000 UTC m=+5.082305714" lastFinishedPulling="2026-03-07 00:55:18.145472862 +0000 UTC m=+17.195056815" observedRunningTime="2026-03-07 00:55:18.510874567 +0000 UTC m=+17.560458532" watchObservedRunningTime="2026-03-07 00:55:18.5714654 +0000 UTC m=+17.621049401" Mar 7 00:55:18.673167 systemd[1]: Started cri-containerd-cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50.scope - libcontainer container cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50. Mar 7 00:55:18.760772 systemd[1]: cri-containerd-cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50.scope: Deactivated successfully. 
Mar 7 00:55:18.765677 containerd[2021]: time="2026-03-07T00:55:18.764177601Z" level=info msg="StartContainer for \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\" returns successfully" Mar 7 00:55:18.858267 containerd[2021]: time="2026-03-07T00:55:18.858148185Z" level=info msg="shim disconnected" id=cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50 namespace=k8s.io Mar 7 00:55:18.859059 containerd[2021]: time="2026-03-07T00:55:18.858616809Z" level=warning msg="cleaning up after shim disconnected" id=cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50 namespace=k8s.io Mar 7 00:55:18.859059 containerd[2021]: time="2026-03-07T00:55:18.858665949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:19.331170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50-rootfs.mount: Deactivated successfully. Mar 7 00:55:19.513178 containerd[2021]: time="2026-03-07T00:55:19.513105308Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 00:55:19.576728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284776356.mount: Deactivated successfully. 
Mar 7 00:55:19.585999 containerd[2021]: time="2026-03-07T00:55:19.585080469Z" level=info msg="CreateContainer within sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\""
Mar 7 00:55:19.590635 containerd[2021]: time="2026-03-07T00:55:19.590570745Z" level=info msg="StartContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\""
Mar 7 00:55:19.689569 systemd[1]: Started cri-containerd-00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294.scope - libcontainer container 00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294.
Mar 7 00:55:19.813532 containerd[2021]: time="2026-03-07T00:55:19.813456850Z" level=info msg="StartContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" returns successfully"
Mar 7 00:55:20.286944 kubelet[3506]: I0307 00:55:20.286867 3506 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 00:55:20.400402 systemd[1]: Created slice kubepods-burstable-pod9c4520df_6126_45a9_a3d5_0ec760c50d4d.slice - libcontainer container kubepods-burstable-pod9c4520df_6126_45a9_a3d5_0ec760c50d4d.slice.
Mar 7 00:55:20.435918 systemd[1]: Created slice kubepods-burstable-pod1aee960f_bbbb_4e34_a189_b955e20c3ef7.slice - libcontainer container kubepods-burstable-pod1aee960f_bbbb_4e34_a189_b955e20c3ef7.slice.
Mar 7 00:55:20.487538 kubelet[3506]: I0307 00:55:20.487455 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aee960f-bbbb-4e34-a189-b955e20c3ef7-config-volume\") pod \"coredns-674b8bbfcf-9qlq4\" (UID: \"1aee960f-bbbb-4e34-a189-b955e20c3ef7\") " pod="kube-system/coredns-674b8bbfcf-9qlq4"
Mar 7 00:55:20.487723 kubelet[3506]: I0307 00:55:20.487555 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc6kz\" (UniqueName: \"kubernetes.io/projected/1aee960f-bbbb-4e34-a189-b955e20c3ef7-kube-api-access-rc6kz\") pod \"coredns-674b8bbfcf-9qlq4\" (UID: \"1aee960f-bbbb-4e34-a189-b955e20c3ef7\") " pod="kube-system/coredns-674b8bbfcf-9qlq4"
Mar 7 00:55:20.487723 kubelet[3506]: I0307 00:55:20.487613 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c4520df-6126-45a9-a3d5-0ec760c50d4d-config-volume\") pod \"coredns-674b8bbfcf-lvmqq\" (UID: \"9c4520df-6126-45a9-a3d5-0ec760c50d4d\") " pod="kube-system/coredns-674b8bbfcf-lvmqq"
Mar 7 00:55:20.487723 kubelet[3506]: I0307 00:55:20.487662 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kg8l\" (UniqueName: \"kubernetes.io/projected/9c4520df-6126-45a9-a3d5-0ec760c50d4d-kube-api-access-6kg8l\") pod \"coredns-674b8bbfcf-lvmqq\" (UID: \"9c4520df-6126-45a9-a3d5-0ec760c50d4d\") " pod="kube-system/coredns-674b8bbfcf-lvmqq"
Mar 7 00:55:20.709454 containerd[2021]: time="2026-03-07T00:55:20.708790222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvmqq,Uid:9c4520df-6126-45a9-a3d5-0ec760c50d4d,Namespace:kube-system,Attempt:0,}"
Mar 7 00:55:20.764266 containerd[2021]: time="2026-03-07T00:55:20.764044511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qlq4,Uid:1aee960f-bbbb-4e34-a189-b955e20c3ef7,Namespace:kube-system,Attempt:0,}"
Mar 7 00:55:23.488883 (udev-worker)[4304]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:55:23.491982 systemd-networkd[1855]: cilium_host: Link UP
Mar 7 00:55:23.494581 systemd-networkd[1855]: cilium_net: Link UP
Mar 7 00:55:23.495817 systemd-networkd[1855]: cilium_net: Gained carrier
Mar 7 00:55:23.496530 systemd-networkd[1855]: cilium_host: Gained carrier
Mar 7 00:55:23.506963 (udev-worker)[4340]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:55:23.704369 systemd-networkd[1855]: cilium_vxlan: Link UP
Mar 7 00:55:23.704383 systemd-networkd[1855]: cilium_vxlan: Gained carrier
Mar 7 00:55:23.745894 systemd-networkd[1855]: cilium_net: Gained IPv6LL
Mar 7 00:55:24.109350 systemd-networkd[1855]: cilium_host: Gained IPv6LL
Mar 7 00:55:24.309281 kernel: NET: Registered PF_ALG protocol family
Mar 7 00:55:25.322726 systemd-networkd[1855]: cilium_vxlan: Gained IPv6LL
Mar 7 00:55:25.725679 systemd-networkd[1855]: lxc_health: Link UP
Mar 7 00:55:25.736851 systemd-networkd[1855]: lxc_health: Gained carrier
Mar 7 00:55:26.358922 (udev-worker)[4352]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:55:26.363681 systemd-networkd[1855]: lxcdce66a3c575f: Link UP
Mar 7 00:55:26.377282 kernel: eth0: renamed from tmpcd3c6
Mar 7 00:55:26.389537 systemd-networkd[1855]: lxc003fd0b68e76: Link UP
Mar 7 00:55:26.403520 kernel: eth0: renamed from tmpb798f
Mar 7 00:55:26.414012 systemd-networkd[1855]: lxcdce66a3c575f: Gained carrier
Mar 7 00:55:26.416987 systemd-networkd[1855]: lxc003fd0b68e76: Gained carrier
Mar 7 00:55:27.369947 systemd-networkd[1855]: lxc_health: Gained IPv6LL
Mar 7 00:55:27.634031 kubelet[3506]: I0307 00:55:27.633817 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2n5fx" podStartSLOduration=13.113825442 podStartE2EDuration="22.633797213s" podCreationTimestamp="2026-03-07 00:55:05 +0000 UTC" firstStartedPulling="2026-03-07 00:55:05.779997776 +0000 UTC m=+4.829581729" lastFinishedPulling="2026-03-07 00:55:15.299969547 +0000 UTC m=+14.349553500" observedRunningTime="2026-03-07 00:55:20.556285713 +0000 UTC m=+19.605869690" watchObservedRunningTime="2026-03-07 00:55:27.633797213 +0000 UTC m=+26.683381178"
Mar 7 00:55:28.265950 systemd-networkd[1855]: lxcdce66a3c575f: Gained IPv6LL
Mar 7 00:55:28.329648 systemd-networkd[1855]: lxc003fd0b68e76: Gained IPv6LL
Mar 7 00:55:28.824131 kubelet[3506]: I0307 00:55:28.824062 3506 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:55:30.951969 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.26:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 7 cilium_host 192.168.0.26:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 8 cilium_net [fe80::7cfa:c7ff:fe36:423d%4]:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 9 cilium_host [fe80::90c7:2dff:fe08:a974%5]:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::385f:b9ff:fe7d:8b4d%6]:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 11 lxc_health [fe80::b0e9:65ff:fef4:cc5b%8]:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 12 lxcdce66a3c575f [fe80::38b8:ffff:feb9:f303%10]:123
Mar 7 00:55:30.953388 ntpd[1988]: 7 Mar 00:55:30 ntpd[1988]: Listen normally on 13 lxc003fd0b68e76 [fe80::d8c6:21ff:fe9b:4d8e%12]:123
Mar 7 00:55:30.952101 ntpd[1988]: Listen normally on 8 cilium_net [fe80::7cfa:c7ff:fe36:423d%4]:123
Mar 7 00:55:30.952186 ntpd[1988]: Listen normally on 9 cilium_host [fe80::90c7:2dff:fe08:a974%5]:123
Mar 7 00:55:30.952860 ntpd[1988]: Listen normally on 10 cilium_vxlan [fe80::385f:b9ff:fe7d:8b4d%6]:123
Mar 7 00:55:30.952955 ntpd[1988]: Listen normally on 11 lxc_health [fe80::b0e9:65ff:fef4:cc5b%8]:123
Mar 7 00:55:30.953026 ntpd[1988]: Listen normally on 12 lxcdce66a3c575f [fe80::38b8:ffff:feb9:f303%10]:123
Mar 7 00:55:30.953096 ntpd[1988]: Listen normally on 13 lxc003fd0b68e76 [fe80::d8c6:21ff:fe9b:4d8e%12]:123
Mar 7 00:55:35.105962 containerd[2021]: time="2026-03-07T00:55:35.105580378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:35.105962 containerd[2021]: time="2026-03-07T00:55:35.105723178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:35.107832 containerd[2021]: time="2026-03-07T00:55:35.105902206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:35.107832 containerd[2021]: time="2026-03-07T00:55:35.106472902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:35.181553 systemd[1]: Started cri-containerd-cd3c65f090ed5d9f5f49761f6e9bd93a5dd92a9f30b6cbd6ebe89790a7c91459.scope - libcontainer container cd3c65f090ed5d9f5f49761f6e9bd93a5dd92a9f30b6cbd6ebe89790a7c91459.
Mar 7 00:55:35.267114 containerd[2021]: time="2026-03-07T00:55:35.266855963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:35.267301 containerd[2021]: time="2026-03-07T00:55:35.267193295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:35.267370 containerd[2021]: time="2026-03-07T00:55:35.267308075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:35.268271 containerd[2021]: time="2026-03-07T00:55:35.267549371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:35.340673 systemd[1]: run-containerd-runc-k8s.io-b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243-runc.d9mttE.mount: Deactivated successfully.
Mar 7 00:55:35.362645 systemd[1]: Started cri-containerd-b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243.scope - libcontainer container b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243.
Mar 7 00:55:35.381335 containerd[2021]: time="2026-03-07T00:55:35.380612531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lvmqq,Uid:9c4520df-6126-45a9-a3d5-0ec760c50d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd3c65f090ed5d9f5f49761f6e9bd93a5dd92a9f30b6cbd6ebe89790a7c91459\""
Mar 7 00:55:35.394198 containerd[2021]: time="2026-03-07T00:55:35.394002551Z" level=info msg="CreateContainer within sandbox \"cd3c65f090ed5d9f5f49761f6e9bd93a5dd92a9f30b6cbd6ebe89790a7c91459\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 00:55:35.430719 containerd[2021]: time="2026-03-07T00:55:35.430637771Z" level=info msg="CreateContainer within sandbox \"cd3c65f090ed5d9f5f49761f6e9bd93a5dd92a9f30b6cbd6ebe89790a7c91459\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca1e8df6eb26d51ca98403d37eca0e25f233d7713ef4583fbbbda38d89d0e9bc\""
Mar 7 00:55:35.431971 containerd[2021]: time="2026-03-07T00:55:35.431851367Z" level=info msg="StartContainer for \"ca1e8df6eb26d51ca98403d37eca0e25f233d7713ef4583fbbbda38d89d0e9bc\""
Mar 7 00:55:35.516055 containerd[2021]: time="2026-03-07T00:55:35.515845608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qlq4,Uid:1aee960f-bbbb-4e34-a189-b955e20c3ef7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243\""
Mar 7 00:55:35.520579 systemd[1]: Started cri-containerd-ca1e8df6eb26d51ca98403d37eca0e25f233d7713ef4583fbbbda38d89d0e9bc.scope - libcontainer container ca1e8df6eb26d51ca98403d37eca0e25f233d7713ef4583fbbbda38d89d0e9bc.
Mar 7 00:55:35.533040 containerd[2021]: time="2026-03-07T00:55:35.531853560Z" level=info msg="CreateContainer within sandbox \"b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 00:55:35.576922 containerd[2021]: time="2026-03-07T00:55:35.576843552Z" level=info msg="CreateContainer within sandbox \"b798f2a9f3ad65b5c24c06da2b510eba20556b8149f0ff03bf9b77e8ce611243\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d79774bd69715572205e9a0feb650f64e0911be7302853622916d4450ed7cbcd\""
Mar 7 00:55:35.585696 containerd[2021]: time="2026-03-07T00:55:35.585259284Z" level=info msg="StartContainer for \"d79774bd69715572205e9a0feb650f64e0911be7302853622916d4450ed7cbcd\""
Mar 7 00:55:35.653937 containerd[2021]: time="2026-03-07T00:55:35.653844888Z" level=info msg="StartContainer for \"ca1e8df6eb26d51ca98403d37eca0e25f233d7713ef4583fbbbda38d89d0e9bc\" returns successfully"
Mar 7 00:55:35.686788 systemd[1]: Started cri-containerd-d79774bd69715572205e9a0feb650f64e0911be7302853622916d4450ed7cbcd.scope - libcontainer container d79774bd69715572205e9a0feb650f64e0911be7302853622916d4450ed7cbcd.
Mar 7 00:55:35.765803 containerd[2021]: time="2026-03-07T00:55:35.765712177Z" level=info msg="StartContainer for \"d79774bd69715572205e9a0feb650f64e0911be7302853622916d4450ed7cbcd\" returns successfully"
Mar 7 00:55:36.680085 kubelet[3506]: I0307 00:55:36.679980 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9qlq4" podStartSLOduration=31.679956578 podStartE2EDuration="31.679956578s" podCreationTimestamp="2026-03-07 00:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:36.657784033 +0000 UTC m=+35.707368010" watchObservedRunningTime="2026-03-07 00:55:36.679956578 +0000 UTC m=+35.729540543"
Mar 7 00:55:36.722203 kubelet[3506]: I0307 00:55:36.722088 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lvmqq" podStartSLOduration=31.722063258 podStartE2EDuration="31.722063258s" podCreationTimestamp="2026-03-07 00:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:36.682632374 +0000 UTC m=+35.732216363" watchObservedRunningTime="2026-03-07 00:55:36.722063258 +0000 UTC m=+35.771647223"
Mar 7 00:55:43.269760 systemd[1]: Started sshd@7-172.31.26.221:22-20.161.92.111:55800.service - OpenSSH per-connection server daemon (20.161.92.111:55800).
Mar 7 00:55:43.780755 sshd[4884]: Accepted publickey for core from 20.161.92.111 port 55800 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:55:43.783667 sshd[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:55:43.793651 systemd-logind[1993]: New session 8 of user core.
Mar 7 00:55:43.801542 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 00:55:44.285292 sshd[4884]: pam_unix(sshd:session): session closed for user core
Mar 7 00:55:44.291934 systemd[1]: sshd@7-172.31.26.221:22-20.161.92.111:55800.service: Deactivated successfully.
Mar 7 00:55:44.297688 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 00:55:44.302677 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit.
Mar 7 00:55:44.304853 systemd-logind[1993]: Removed session 8.
Mar 7 00:55:49.384799 systemd[1]: Started sshd@8-172.31.26.221:22-20.161.92.111:55814.service - OpenSSH per-connection server daemon (20.161.92.111:55814).
Mar 7 00:55:49.884834 sshd[4898]: Accepted publickey for core from 20.161.92.111 port 55814 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:55:49.887543 sshd[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:55:49.895890 systemd-logind[1993]: New session 9 of user core.
Mar 7 00:55:49.901518 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 00:55:50.358541 sshd[4898]: pam_unix(sshd:session): session closed for user core
Mar 7 00:55:50.364639 systemd[1]: sshd@8-172.31.26.221:22-20.161.92.111:55814.service: Deactivated successfully.
Mar 7 00:55:50.369043 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 00:55:50.376073 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
Mar 7 00:55:50.378666 systemd-logind[1993]: Removed session 9.
Mar 7 00:55:55.456788 systemd[1]: Started sshd@9-172.31.26.221:22-20.161.92.111:39834.service - OpenSSH per-connection server daemon (20.161.92.111:39834).
Mar 7 00:55:55.965142 sshd[4912]: Accepted publickey for core from 20.161.92.111 port 39834 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:55:55.968021 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:55:55.975997 systemd-logind[1993]: New session 10 of user core.
Mar 7 00:55:55.985566 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 00:55:56.440787 sshd[4912]: pam_unix(sshd:session): session closed for user core
Mar 7 00:55:56.448944 systemd[1]: sshd@9-172.31.26.221:22-20.161.92.111:39834.service: Deactivated successfully.
Mar 7 00:55:56.449155 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit.
Mar 7 00:55:56.453666 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 00:55:56.457899 systemd-logind[1993]: Removed session 10.
Mar 7 00:56:01.534759 systemd[1]: Started sshd@10-172.31.26.221:22-20.161.92.111:45520.service - OpenSSH per-connection server daemon (20.161.92.111:45520).
Mar 7 00:56:02.060298 sshd[4928]: Accepted publickey for core from 20.161.92.111 port 45520 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:02.064712 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:02.073631 systemd-logind[1993]: New session 11 of user core.
Mar 7 00:56:02.081505 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 00:56:02.530398 sshd[4928]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:02.538945 systemd[1]: sshd@10-172.31.26.221:22-20.161.92.111:45520.service: Deactivated successfully.
Mar 7 00:56:02.544730 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 00:56:02.546792 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit.
Mar 7 00:56:02.550077 systemd-logind[1993]: Removed session 11.
Mar 7 00:56:07.636948 systemd[1]: Started sshd@11-172.31.26.221:22-20.161.92.111:45524.service - OpenSSH per-connection server daemon (20.161.92.111:45524).
Mar 7 00:56:08.159269 sshd[4944]: Accepted publickey for core from 20.161.92.111 port 45524 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:08.161505 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:08.169098 systemd-logind[1993]: New session 12 of user core.
Mar 7 00:56:08.182728 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 00:56:08.631191 sshd[4944]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:08.638695 systemd[1]: sshd@11-172.31.26.221:22-20.161.92.111:45524.service: Deactivated successfully.
Mar 7 00:56:08.644956 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 00:56:08.648928 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit.
Mar 7 00:56:08.651454 systemd-logind[1993]: Removed session 12.
Mar 7 00:56:08.733771 systemd[1]: Started sshd@12-172.31.26.221:22-20.161.92.111:45530.service - OpenSSH per-connection server daemon (20.161.92.111:45530).
Mar 7 00:56:09.241269 sshd[4958]: Accepted publickey for core from 20.161.92.111 port 45530 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:09.243769 sshd[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:09.252783 systemd-logind[1993]: New session 13 of user core.
Mar 7 00:56:09.260533 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:56:09.796952 sshd[4958]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:09.805820 systemd[1]: sshd@12-172.31.26.221:22-20.161.92.111:45530.service: Deactivated successfully.
Mar 7 00:56:09.811023 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:56:09.812606 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:56:09.814912 systemd-logind[1993]: Removed session 13.
Mar 7 00:56:09.893762 systemd[1]: Started sshd@13-172.31.26.221:22-20.161.92.111:45544.service - OpenSSH per-connection server daemon (20.161.92.111:45544).
Mar 7 00:56:10.400725 sshd[4968]: Accepted publickey for core from 20.161.92.111 port 45544 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:10.404254 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:10.412760 systemd-logind[1993]: New session 14 of user core.
Mar 7 00:56:10.422561 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:56:10.874432 sshd[4968]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:10.883807 systemd[1]: sshd@13-172.31.26.221:22-20.161.92.111:45544.service: Deactivated successfully.
Mar 7 00:56:10.888745 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:56:10.891532 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:56:10.893736 systemd-logind[1993]: Removed session 14.
Mar 7 00:56:15.971748 systemd[1]: Started sshd@14-172.31.26.221:22-20.161.92.111:44792.service - OpenSSH per-connection server daemon (20.161.92.111:44792).
Mar 7 00:56:16.474397 sshd[4981]: Accepted publickey for core from 20.161.92.111 port 44792 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:16.476971 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:16.486249 systemd-logind[1993]: New session 15 of user core.
Mar 7 00:56:16.491548 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:56:16.949860 sshd[4981]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:16.956847 systemd[1]: sshd@14-172.31.26.221:22-20.161.92.111:44792.service: Deactivated successfully.
Mar 7 00:56:16.962177 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:56:16.964215 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:56:16.966138 systemd-logind[1993]: Removed session 15.
Mar 7 00:56:22.046803 systemd[1]: Started sshd@15-172.31.26.221:22-20.161.92.111:45936.service - OpenSSH per-connection server daemon (20.161.92.111:45936).
Mar 7 00:56:22.563272 sshd[4995]: Accepted publickey for core from 20.161.92.111 port 45936 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:22.565514 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:22.574337 systemd-logind[1993]: New session 16 of user core.
Mar 7 00:56:22.582537 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:56:23.045882 sshd[4995]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:23.054452 systemd[1]: sshd@15-172.31.26.221:22-20.161.92.111:45936.service: Deactivated successfully.
Mar 7 00:56:23.062591 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:56:23.066398 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:56:23.068848 systemd-logind[1993]: Removed session 16.
Mar 7 00:56:28.142799 systemd[1]: Started sshd@16-172.31.26.221:22-20.161.92.111:45952.service - OpenSSH per-connection server daemon (20.161.92.111:45952).
Mar 7 00:56:28.650990 sshd[5007]: Accepted publickey for core from 20.161.92.111 port 45952 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:28.653551 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:28.661481 systemd-logind[1993]: New session 17 of user core.
Mar 7 00:56:28.668546 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:56:29.135564 sshd[5007]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:29.142358 systemd[1]: sshd@16-172.31.26.221:22-20.161.92.111:45952.service: Deactivated successfully.
Mar 7 00:56:29.150094 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:56:29.151707 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:56:29.153531 systemd-logind[1993]: Removed session 17.
Mar 7 00:56:29.235766 systemd[1]: Started sshd@17-172.31.26.221:22-20.161.92.111:45966.service - OpenSSH per-connection server daemon (20.161.92.111:45966).
Mar 7 00:56:29.736553 sshd[5020]: Accepted publickey for core from 20.161.92.111 port 45966 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:29.739302 sshd[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:29.750128 systemd-logind[1993]: New session 18 of user core.
Mar 7 00:56:29.752511 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:56:30.292584 sshd[5020]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:30.300373 systemd[1]: sshd@17-172.31.26.221:22-20.161.92.111:45966.service: Deactivated successfully.
Mar 7 00:56:30.305572 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:56:30.307573 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:56:30.310015 systemd-logind[1993]: Removed session 18.
Mar 7 00:56:30.394778 systemd[1]: Started sshd@18-172.31.26.221:22-20.161.92.111:48510.service - OpenSSH per-connection server daemon (20.161.92.111:48510).
Mar 7 00:56:30.907765 sshd[5030]: Accepted publickey for core from 20.161.92.111 port 48510 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:30.910476 sshd[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:30.919272 systemd-logind[1993]: New session 19 of user core.
Mar 7 00:56:30.928515 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:56:32.241538 sshd[5030]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:32.248786 systemd[1]: sshd@18-172.31.26.221:22-20.161.92.111:48510.service: Deactivated successfully.
Mar 7 00:56:32.258109 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:56:32.261673 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:56:32.264721 systemd-logind[1993]: Removed session 19.
Mar 7 00:56:32.337789 systemd[1]: Started sshd@19-172.31.26.221:22-20.161.92.111:48524.service - OpenSSH per-connection server daemon (20.161.92.111:48524).
Mar 7 00:56:32.841147 sshd[5048]: Accepted publickey for core from 20.161.92.111 port 48524 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:32.842873 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:32.851819 systemd-logind[1993]: New session 20 of user core.
Mar 7 00:56:32.865515 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:56:33.578012 sshd[5048]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:33.584249 systemd[1]: sshd@19-172.31.26.221:22-20.161.92.111:48524.service: Deactivated successfully.
Mar 7 00:56:33.587710 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:56:33.589068 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:56:33.591916 systemd-logind[1993]: Removed session 20.
Mar 7 00:56:33.669745 systemd[1]: Started sshd@20-172.31.26.221:22-20.161.92.111:48532.service - OpenSSH per-connection server daemon (20.161.92.111:48532).
Mar 7 00:56:34.184755 sshd[5059]: Accepted publickey for core from 20.161.92.111 port 48532 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:34.187442 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:34.198197 systemd-logind[1993]: New session 21 of user core.
Mar 7 00:56:34.202538 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:56:34.652156 sshd[5059]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:34.660010 systemd[1]: sshd@20-172.31.26.221:22-20.161.92.111:48532.service: Deactivated successfully.
Mar 7 00:56:34.664130 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 00:56:34.665968 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit.
Mar 7 00:56:34.668931 systemd-logind[1993]: Removed session 21.
Mar 7 00:56:39.748773 systemd[1]: Started sshd@21-172.31.26.221:22-20.161.92.111:48542.service - OpenSSH per-connection server daemon (20.161.92.111:48542).
Mar 7 00:56:40.252031 sshd[5073]: Accepted publickey for core from 20.161.92.111 port 48542 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:40.254647 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:40.262538 systemd-logind[1993]: New session 22 of user core.
Mar 7 00:56:40.269485 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 00:56:40.718554 sshd[5073]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:40.726387 systemd[1]: sshd@21-172.31.26.221:22-20.161.92.111:48542.service: Deactivated successfully.
Mar 7 00:56:40.730007 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 00:56:40.732159 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit.
Mar 7 00:56:40.734289 systemd-logind[1993]: Removed session 22.
Mar 7 00:56:45.821993 systemd[1]: Started sshd@22-172.31.26.221:22-20.161.92.111:33102.service - OpenSSH per-connection server daemon (20.161.92.111:33102).
Mar 7 00:56:46.342680 sshd[5088]: Accepted publickey for core from 20.161.92.111 port 33102 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:46.345380 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:46.353137 systemd-logind[1993]: New session 23 of user core.
Mar 7 00:56:46.365476 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 00:56:46.821876 sshd[5088]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:46.830020 systemd[1]: sshd@22-172.31.26.221:22-20.161.92.111:33102.service: Deactivated successfully.
Mar 7 00:56:46.835113 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 00:56:46.836681 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit.
Mar 7 00:56:46.839547 systemd-logind[1993]: Removed session 23.
Mar 7 00:56:51.915760 systemd[1]: Started sshd@23-172.31.26.221:22-20.161.92.111:55898.service - OpenSSH per-connection server daemon (20.161.92.111:55898).
Mar 7 00:56:52.427737 sshd[5101]: Accepted publickey for core from 20.161.92.111 port 55898 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:52.429526 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:52.438393 systemd-logind[1993]: New session 24 of user core.
Mar 7 00:56:52.443501 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 00:56:52.897519 sshd[5101]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:52.904429 systemd[1]: sshd@23-172.31.26.221:22-20.161.92.111:55898.service: Deactivated successfully.
Mar 7 00:56:52.909576 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 00:56:52.911246 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit.
Mar 7 00:56:52.913171 systemd-logind[1993]: Removed session 24.
Mar 7 00:56:52.990775 systemd[1]: Started sshd@24-172.31.26.221:22-20.161.92.111:55914.service - OpenSSH per-connection server daemon (20.161.92.111:55914).
Mar 7 00:56:53.501278 sshd[5113]: Accepted publickey for core from 20.161.92.111 port 55914 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:53.504021 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:53.512387 systemd-logind[1993]: New session 25 of user core.
Mar 7 00:56:53.524536 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 00:56:56.274760 systemd[1]: run-containerd-runc-k8s.io-00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294-runc.8p0416.mount: Deactivated successfully.
Mar 7 00:56:56.281673 containerd[2021]: time="2026-03-07T00:56:56.278481329Z" level=info msg="StopContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" with timeout 30 (s)"
Mar 7 00:56:56.286361 containerd[2021]: time="2026-03-07T00:56:56.283363577Z" level=info msg="Stop container \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" with signal terminated"
Mar 7 00:56:56.306975 containerd[2021]: time="2026-03-07T00:56:56.306851717Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 00:56:56.314105 systemd[1]: cri-containerd-dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7.scope: Deactivated successfully.
Mar 7 00:56:56.326165 containerd[2021]: time="2026-03-07T00:56:56.326094797Z" level=info msg="StopContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" with timeout 2 (s)"
Mar 7 00:56:56.326946 containerd[2021]: time="2026-03-07T00:56:56.326802125Z" level=info msg="Stop container \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" with signal terminated"
Mar 7 00:56:56.348682 systemd-networkd[1855]: lxc_health: Link DOWN
Mar 7 00:56:56.348702 systemd-networkd[1855]: lxc_health: Lost carrier
Mar 7 00:56:56.393039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7-rootfs.mount: Deactivated successfully.
Mar 7 00:56:56.395786 systemd[1]: cri-containerd-00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294.scope: Deactivated successfully.
Mar 7 00:56:56.396536 systemd[1]: cri-containerd-00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294.scope: Consumed 15.201s CPU time.
Mar 7 00:56:56.407094 containerd[2021]: time="2026-03-07T00:56:56.406755450Z" level=info msg="shim disconnected" id=dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7 namespace=k8s.io
Mar 7 00:56:56.407094 containerd[2021]: time="2026-03-07T00:56:56.406829010Z" level=warning msg="cleaning up after shim disconnected" id=dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7 namespace=k8s.io
Mar 7 00:56:56.407094 containerd[2021]: time="2026-03-07T00:56:56.406849650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:56.442429 containerd[2021]: time="2026-03-07T00:56:56.442357626Z" level=info msg="StopContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" returns successfully"
Mar 7 00:56:56.444874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294-rootfs.mount: Deactivated successfully.
Mar 7 00:56:56.448314 containerd[2021]: time="2026-03-07T00:56:56.447926214Z" level=info msg="StopPodSandbox for \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\""
Mar 7 00:56:56.448314 containerd[2021]: time="2026-03-07T00:56:56.448076490Z" level=info msg="Container to stop \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.454925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda-shm.mount: Deactivated successfully.
Mar 7 00:56:56.457020 containerd[2021]: time="2026-03-07T00:56:56.456700674Z" level=info msg="shim disconnected" id=00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294 namespace=k8s.io
Mar 7 00:56:56.457020 containerd[2021]: time="2026-03-07T00:56:56.456777762Z" level=warning msg="cleaning up after shim disconnected" id=00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294 namespace=k8s.io
Mar 7 00:56:56.457020 containerd[2021]: time="2026-03-07T00:56:56.456797514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:56.467885 systemd[1]: cri-containerd-46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda.scope: Deactivated successfully.
Mar 7 00:56:56.498534 containerd[2021]: time="2026-03-07T00:56:56.498298146Z" level=info msg="StopContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" returns successfully"
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499142838Z" level=info msg="StopPodSandbox for \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\""
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499208346Z" level=info msg="Container to stop \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499285470Z" level=info msg="Container to stop \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499310178Z" level=info msg="Container to stop \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499333458Z" level=info msg="Container to stop \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.499575 containerd[2021]: time="2026-03-07T00:56:56.499357218Z" level=info msg="Container to stop \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 00:56:56.502578 kubelet[3506]: E0307 00:56:56.502404 3506 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 00:56:56.517919 systemd[1]: cri-containerd-feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83.scope: Deactivated successfully.
Mar 7 00:56:56.539109 containerd[2021]: time="2026-03-07T00:56:56.537280722Z" level=info msg="shim disconnected" id=46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda namespace=k8s.io
Mar 7 00:56:56.539109 containerd[2021]: time="2026-03-07T00:56:56.537361818Z" level=warning msg="cleaning up after shim disconnected" id=46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda namespace=k8s.io
Mar 7 00:56:56.539109 containerd[2021]: time="2026-03-07T00:56:56.537383694Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:56.577183 containerd[2021]: time="2026-03-07T00:56:56.577133046Z" level=info msg="TearDown network for sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" successfully"
Mar 7 00:56:56.577824 containerd[2021]: time="2026-03-07T00:56:56.577597878Z" level=info msg="StopPodSandbox for \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" returns successfully"
Mar 7 00:56:56.578457 containerd[2021]: time="2026-03-07T00:56:56.578048094Z" level=info msg="shim disconnected" id=feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83 namespace=k8s.io
Mar 7 00:56:56.578457 containerd[2021]: time="2026-03-07T00:56:56.578148114Z" level=warning msg="cleaning up after shim disconnected" id=feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83 namespace=k8s.io
Mar 7 00:56:56.578457 containerd[2021]: time="2026-03-07T00:56:56.578215926Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:56:56.610344 containerd[2021]: time="2026-03-07T00:56:56.610287271Z" level=info msg="TearDown network for sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" successfully"
Mar 7 00:56:56.610550 containerd[2021]: time="2026-03-07T00:56:56.610521295Z" level=info msg="StopPodSandbox for \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" returns successfully"
Mar 7 00:56:56.749958 kubelet[3506]: I0307 00:56:56.749891 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347db95b-1bb5-4912-802d-8d432587f80e-cilium-config-path\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750167 kubelet[3506]: I0307 00:56:56.749969 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqzrq\" (UniqueName: \"kubernetes.io/projected/3dc2a231-e8fd-4a59-8149-a1c884c8c509-kube-api-access-nqzrq\") pod \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\" (UID: \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\") "
Mar 7 00:56:56.750167 kubelet[3506]: I0307 00:56:56.750057 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-cgroup\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750167 kubelet[3506]: I0307 00:56:56.750133 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-lib-modules\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750392 kubelet[3506]: I0307 00:56:56.750175 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-net\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750392 kubelet[3506]: I0307 00:56:56.750276 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-hubble-tls\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750506 kubelet[3506]: I0307 00:56:56.750413 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347db95b-1bb5-4912-802d-8d432587f80e-clustermesh-secrets\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750506 kubelet[3506]: I0307 00:56:56.750449 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-etc-cni-netd\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750620 kubelet[3506]: I0307 00:56:56.750512 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-bpf-maps\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750620 kubelet[3506]: I0307 00:56:56.750546 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-run\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750620 kubelet[3506]: I0307 00:56:56.750611 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-kernel\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750798 kubelet[3506]: I0307 00:56:56.750683 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlskn\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-kube-api-access-nlskn\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750798 kubelet[3506]: I0307 00:56:56.750721 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-hostproc\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750798 kubelet[3506]: I0307 00:56:56.750789 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3dc2a231-e8fd-4a59-8149-a1c884c8c509-cilium-config-path\") pod \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\" (UID: \"3dc2a231-e8fd-4a59-8149-a1c884c8c509\") "
Mar 7 00:56:56.750962 kubelet[3506]: I0307 00:56:56.750868 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cni-path\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.750962 kubelet[3506]: I0307 00:56:56.750936 3506 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-xtables-lock\") pod \"347db95b-1bb5-4912-802d-8d432587f80e\" (UID: \"347db95b-1bb5-4912-802d-8d432587f80e\") "
Mar 7 00:56:56.752693 kubelet[3506]: I0307 00:56:56.751144 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.752693 kubelet[3506]: I0307 00:56:56.751691 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.752693 kubelet[3506]: I0307 00:56:56.751846 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.752693 kubelet[3506]: I0307 00:56:56.751923 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.752693 kubelet[3506]: I0307 00:56:56.752061 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.753607 kubelet[3506]: I0307 00:56:56.752143 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.753748 kubelet[3506]: I0307 00:56:56.752565 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-hostproc" (OuterVolumeSpecName: "hostproc") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.753748 kubelet[3506]: I0307 00:56:56.753681 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cni-path" (OuterVolumeSpecName: "cni-path") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.754491 kubelet[3506]: I0307 00:56:56.754301 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.755179 kubelet[3506]: I0307 00:56:56.754766 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 00:56:56.764253 kubelet[3506]: I0307 00:56:56.763624 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-kube-api-access-nlskn" (OuterVolumeSpecName: "kube-api-access-nlskn") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "kube-api-access-nlskn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:56:56.768059 kubelet[3506]: I0307 00:56:56.767912 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc2a231-e8fd-4a59-8149-a1c884c8c509-kube-api-access-nqzrq" (OuterVolumeSpecName: "kube-api-access-nqzrq") pod "3dc2a231-e8fd-4a59-8149-a1c884c8c509" (UID: "3dc2a231-e8fd-4a59-8149-a1c884c8c509"). InnerVolumeSpecName "kube-api-access-nqzrq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:56:56.768559 kubelet[3506]: I0307 00:56:56.768507 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/347db95b-1bb5-4912-802d-8d432587f80e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 00:56:56.771876 kubelet[3506]: I0307 00:56:56.771805 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dc2a231-e8fd-4a59-8149-a1c884c8c509-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3dc2a231-e8fd-4a59-8149-a1c884c8c509" (UID: "3dc2a231-e8fd-4a59-8149-a1c884c8c509"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:56:56.772809 kubelet[3506]: I0307 00:56:56.772767 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/347db95b-1bb5-4912-802d-8d432587f80e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 00:56:56.774040 kubelet[3506]: I0307 00:56:56.773953 3506 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "347db95b-1bb5-4912-802d-8d432587f80e" (UID: "347db95b-1bb5-4912-802d-8d432587f80e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 00:56:56.850194 kubelet[3506]: I0307 00:56:56.847023 3506 scope.go:117] "RemoveContainer" containerID="dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7"
Mar 7 00:56:56.851569 kubelet[3506]: I0307 00:56:56.851530 3506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqzrq\" (UniqueName: \"kubernetes.io/projected/3dc2a231-e8fd-4a59-8149-a1c884c8c509-kube-api-access-nqzrq\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852018 kubelet[3506]: I0307 00:56:56.851784 3506 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-cgroup\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852018 kubelet[3506]: I0307 00:56:56.851817 3506 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-lib-modules\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852018 kubelet[3506]: I0307 00:56:56.851868 3506 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-net\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852018 kubelet[3506]: I0307 00:56:56.851893 3506 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-hubble-tls\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852018 kubelet[3506]: I0307 00:56:56.851940 3506 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/347db95b-1bb5-4912-802d-8d432587f80e-clustermesh-secrets\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852550 kubelet[3506]: I0307 00:56:56.851966 3506 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-etc-cni-netd\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852550 kubelet[3506]: I0307 00:56:56.852292 3506 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-bpf-maps\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852550 kubelet[3506]: I0307 00:56:56.852318 3506 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cilium-run\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852720 3506 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-host-proc-sys-kernel\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852754 3506 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nlskn\" (UniqueName: \"kubernetes.io/projected/347db95b-1bb5-4912-802d-8d432587f80e-kube-api-access-nlskn\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852775 3506 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-hostproc\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852828 3506 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3dc2a231-e8fd-4a59-8149-a1c884c8c509-cilium-config-path\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852853 3506 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-cni-path\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852897 3506 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/347db95b-1bb5-4912-802d-8d432587f80e-xtables-lock\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.852978 kubelet[3506]: I0307 00:56:56.852925 3506 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/347db95b-1bb5-4912-802d-8d432587f80e-cilium-config-path\") on node \"ip-172-31-26-221\" DevicePath \"\""
Mar 7 00:56:56.857490 containerd[2021]: time="2026-03-07T00:56:56.856825016Z" level=info msg="RemoveContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\""
Mar 7 00:56:56.864130 systemd[1]: Removed slice kubepods-besteffort-pod3dc2a231_e8fd_4a59_8149_a1c884c8c509.slice - libcontainer container kubepods-besteffort-pod3dc2a231_e8fd_4a59_8149_a1c884c8c509.slice.
Mar 7 00:56:56.874868 containerd[2021]: time="2026-03-07T00:56:56.874449992Z" level=info msg="RemoveContainer for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" returns successfully"
Mar 7 00:56:56.876214 kubelet[3506]: I0307 00:56:56.876157 3506 scope.go:117] "RemoveContainer" containerID="dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7"
Mar 7 00:56:56.876749 containerd[2021]: time="2026-03-07T00:56:56.876610568Z" level=error msg="ContainerStatus for \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\": not found"
Mar 7 00:56:56.877101 kubelet[3506]: E0307 00:56:56.876912 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\": not found" containerID="dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7"
Mar 7 00:56:56.877191 kubelet[3506]: I0307 00:56:56.877098 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7"} err="failed to get container status \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfc2c32174f0ce2822771512437685672c4563c9910618ba9a8b7e8caca3d5d7\": not found"
Mar 7 00:56:56.877191 kubelet[3506]: I0307 00:56:56.877174 3506 scope.go:117] "RemoveContainer" containerID="00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294"
Mar 7 00:56:56.881386 containerd[2021]: time="2026-03-07T00:56:56.881058572Z" level=info msg="RemoveContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\""
Mar 7 00:56:56.887584 systemd[1]: Removed slice kubepods-burstable-pod347db95b_1bb5_4912_802d_8d432587f80e.slice - libcontainer container kubepods-burstable-pod347db95b_1bb5_4912_802d_8d432587f80e.slice.
Mar 7 00:56:56.888059 systemd[1]: kubepods-burstable-pod347db95b_1bb5_4912_802d_8d432587f80e.slice: Consumed 15.367s CPU time.
Mar 7 00:56:56.894465 containerd[2021]: time="2026-03-07T00:56:56.894382352Z" level=info msg="RemoveContainer for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" returns successfully"
Mar 7 00:56:56.895025 kubelet[3506]: I0307 00:56:56.894762 3506 scope.go:117] "RemoveContainer" containerID="cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50"
Mar 7 00:56:56.899889 containerd[2021]: time="2026-03-07T00:56:56.899316536Z" level=info msg="RemoveContainer for \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\""
Mar 7 00:56:56.906871 containerd[2021]: time="2026-03-07T00:56:56.906808112Z" level=info msg="RemoveContainer for \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\" returns successfully"
Mar 7 00:56:56.907512 kubelet[3506]: I0307 00:56:56.907179 3506 scope.go:117] "RemoveContainer" containerID="d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018"
Mar 7 00:56:56.911709 containerd[2021]: time="2026-03-07T00:56:56.911637440Z" level=info msg="RemoveContainer for \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\""
Mar 7 00:56:56.922312 containerd[2021]: time="2026-03-07T00:56:56.922193576Z" level=info msg="RemoveContainer for \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\" returns successfully"
Mar 7 00:56:56.922774 kubelet[3506]: I0307 00:56:56.922743 3506 scope.go:117] "RemoveContainer" containerID="453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133"
Mar 7 00:56:56.925743 containerd[2021]: time="2026-03-07T00:56:56.925641200Z" level=info msg="RemoveContainer for \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\""
Mar 7 00:56:56.932494 containerd[2021]: time="2026-03-07T00:56:56.932413880Z" level=info msg="RemoveContainer for \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\" returns successfully"
Mar 7 00:56:56.933022 kubelet[3506]: I0307 00:56:56.932943 3506 scope.go:117] "RemoveContainer" containerID="86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07"
Mar 7 00:56:56.934876 containerd[2021]: time="2026-03-07T00:56:56.934830908Z" level=info msg="RemoveContainer for \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\""
Mar 7 00:56:56.941278 containerd[2021]: time="2026-03-07T00:56:56.941035676Z" level=info msg="RemoveContainer for \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\" returns successfully"
Mar 7 00:56:56.941811 kubelet[3506]: I0307 00:56:56.941772 3506 scope.go:117] "RemoveContainer" containerID="00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294"
Mar 7 00:56:56.942519 containerd[2021]: time="2026-03-07T00:56:56.942403520Z" level=error msg="ContainerStatus for \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\": not found"
Mar 7 00:56:56.943015 kubelet[3506]: E0307 00:56:56.942791 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\": not found" containerID="00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294"
Mar 7 00:56:56.943015 kubelet[3506]: I0307 00:56:56.942842 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294"} err="failed to get container status \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\": rpc error: code = NotFound desc = an error occurred when try to find container \"00de6b3f4a88720fe0c35147dca429f35168e5bfb2d9015a2401ca7aa8b77294\": not found"
Mar 7 00:56:56.943015 kubelet[3506]: I0307 00:56:56.942877 3506 scope.go:117] "RemoveContainer" containerID="cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50"
Mar 7 00:56:56.943651 containerd[2021]: time="2026-03-07T00:56:56.943546916Z" level=error msg="ContainerStatus for \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\": not found"
Mar 7 00:56:56.943908 kubelet[3506]: E0307 00:56:56.943865 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\": not found" containerID="cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50"
Mar 7 00:56:56.943987 kubelet[3506]: I0307 00:56:56.943919 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50"} err="failed to get container status \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\": rpc error: code = NotFound desc = an error occurred when try to find container \"cac18b25c547e6d86124c188336b4a940561f527b52759a446b3e7c06fdb8f50\": not found"
Mar 7 00:56:56.943987 kubelet[3506]: I0307 00:56:56.943951 3506 scope.go:117] "RemoveContainer" containerID="d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018"
Mar 7 00:56:56.944389 containerd[2021]: time="2026-03-07T00:56:56.944244836Z" level=error msg="ContainerStatus for \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\": not found"
Mar 7 00:56:56.944548 kubelet[3506]: E0307 00:56:56.944458 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\": not found" containerID="d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018"
Mar 7 00:56:56.944548 kubelet[3506]: I0307 00:56:56.944509 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018"} err="failed to get container status \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\": rpc error: code = NotFound desc = an error occurred when try to find container \"d223cea72bf75d0c78d7af65448d671bed2ac12438a6baa64b6134883d377018\": not found"
Mar 7 00:56:56.944548 kubelet[3506]: I0307 00:56:56.944545 3506 scope.go:117] "RemoveContainer" containerID="453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133"
Mar 7 00:56:56.945191 containerd[2021]: time="2026-03-07T00:56:56.944850152Z" level=error msg="ContainerStatus for \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\": not found"
Mar 7 00:56:56.945610 kubelet[3506]: E0307 00:56:56.945458 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\": not found" containerID="453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133"
Mar 7 00:56:56.945610 kubelet[3506]: I0307 00:56:56.945532 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133"} err="failed to get container status \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\": rpc error: code = NotFound desc = an error occurred when try to find container \"453eb7c1bb2ca87caa81457f8a1f325ddbf0b287681d6fa1ea31d8d1d2c66133\": not found"
Mar 7 00:56:56.945610 kubelet[3506]: I0307 00:56:56.945565 3506 scope.go:117] "RemoveContainer" containerID="86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07"
Mar 7 00:56:56.946216 containerd[2021]: time="2026-03-07T00:56:56.946130000Z" level=error msg="ContainerStatus for \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\": not found"
Mar 7 00:56:56.946553 kubelet[3506]: E0307 00:56:56.946384 3506 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\": not found" containerID="86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07"
Mar 7 00:56:56.946553 kubelet[3506]: I0307 00:56:56.946425 3506 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07"} err="failed to get container status \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\": rpc error: code = NotFound desc = an error occurred when try to find container \"86579aba9d6ff79649fa5805c9fff07aad5db5f893650b0c7aea1f5c6211fe07\": not found"
Mar 7 00:56:57.266475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda-rootfs.mount: Deactivated successfully.
Mar 7 00:56:57.266885 systemd[1]: var-lib-kubelet-pods-3dc2a231\x2de8fd\x2d4a59\x2d8149\x2da1c884c8c509-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqzrq.mount: Deactivated successfully.
Mar 7 00:56:57.267157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83-rootfs.mount: Deactivated successfully.
Mar 7 00:56:57.267458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83-shm.mount: Deactivated successfully.
Mar 7 00:56:57.267699 systemd[1]: var-lib-kubelet-pods-347db95b\x2d1bb5\x2d4912\x2d802d\x2d8d432587f80e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnlskn.mount: Deactivated successfully.
Mar 7 00:56:57.267951 systemd[1]: var-lib-kubelet-pods-347db95b\x2d1bb5\x2d4912\x2d802d\x2d8d432587f80e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 7 00:56:57.268103 systemd[1]: var-lib-kubelet-pods-347db95b\x2d1bb5\x2d4912\x2d802d\x2d8d432587f80e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 7 00:56:57.317277 kubelet[3506]: I0307 00:56:57.315980 3506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347db95b-1bb5-4912-802d-8d432587f80e" path="/var/lib/kubelet/pods/347db95b-1bb5-4912-802d-8d432587f80e/volumes"
Mar 7 00:56:57.317727 kubelet[3506]: I0307 00:56:57.317697 3506 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc2a231-e8fd-4a59-8149-a1c884c8c509" path="/var/lib/kubelet/pods/3dc2a231-e8fd-4a59-8149-a1c884c8c509/volumes"
Mar 7 00:56:58.237129 sshd[5113]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:58.241638 systemd[1]: sshd@24-172.31.26.221:22-20.161.92.111:55914.service: Deactivated successfully.
Mar 7 00:56:58.246474 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 00:56:58.246975 systemd[1]: session-25.scope: Consumed 1.806s CPU time.
Mar 7 00:56:58.249729 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit.
Mar 7 00:56:58.252835 systemd-logind[1993]: Removed session 25.
Mar 7 00:56:58.337810 systemd[1]: Started sshd@25-172.31.26.221:22-20.161.92.111:55928.service - OpenSSH per-connection server daemon (20.161.92.111:55928).
Mar 7 00:56:58.845671 sshd[5283]: Accepted publickey for core from 20.161.92.111 port 55928 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:58.848902 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:58.857415 systemd-logind[1993]: New session 26 of user core.
Mar 7 00:56:58.863832 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 00:56:58.951887 ntpd[1988]: Deleting interface #11 lxc_health, fe80::b0e9:65ff:fef4:cc5b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs
Mar 7 00:56:58.952437 ntpd[1988]: 7 Mar 00:56:58 ntpd[1988]: Deleting interface #11 lxc_health, fe80::b0e9:65ff:fef4:cc5b%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs
Mar 7 00:56:59.313250 kubelet[3506]: E0307 00:56:59.311468 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9qlq4" podUID="1aee960f-bbbb-4e34-a189-b955e20c3ef7"
Mar 7 00:57:01.258785 containerd[2021]: time="2026-03-07T00:57:01.258711286Z" level=info msg="StopPodSandbox for \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\""
Mar 7 00:57:01.259387 containerd[2021]: time="2026-03-07T00:57:01.258863302Z" level=info msg="TearDown network for sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" successfully"
Mar 7 00:57:01.259387 containerd[2021]: time="2026-03-07T00:57:01.258888430Z" level=info msg="StopPodSandbox for \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" returns successfully"
Mar 7 00:57:01.261261 containerd[2021]: time="2026-03-07T00:57:01.259561954Z" level=info msg="RemovePodSandbox for \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\""
Mar 7 00:57:01.261261 containerd[2021]: time="2026-03-07T00:57:01.259620250Z" level=info msg="Forcibly stopping sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\""
Mar 7 00:57:01.261261 containerd[2021]: time="2026-03-07T00:57:01.259717726Z" level=info msg="TearDown network for sandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" successfully"
Mar 7 00:57:01.279554 containerd[2021]: time="2026-03-07T00:57:01.279477646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 00:57:01.280162 containerd[2021]: time="2026-03-07T00:57:01.279884710Z" level=info msg="RemovePodSandbox \"feb48a4e1d78c08b44f42cad001c9a4f6cb160f794c7ae0d89a43c0a351f6c83\" returns successfully"
Mar 7 00:57:01.281551 containerd[2021]: time="2026-03-07T00:57:01.281392438Z" level=info msg="StopPodSandbox for \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\""
Mar 7 00:57:01.281795 containerd[2021]: time="2026-03-07T00:57:01.281733898Z" level=info msg="TearDown network for sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" successfully"
Mar 7 00:57:01.282012 containerd[2021]: time="2026-03-07T00:57:01.281874970Z" level=info msg="StopPodSandbox for \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" returns successfully"
Mar 7 00:57:01.284283 containerd[2021]: time="2026-03-07T00:57:01.282813670Z" level=info msg="RemovePodSandbox for \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\""
Mar 7 00:57:01.284283 containerd[2021]: time="2026-03-07T00:57:01.282864346Z" level=info msg="Forcibly stopping sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\""
Mar 7 00:57:01.284283 containerd[2021]: time="2026-03-07T00:57:01.282962530Z" level=info msg="TearDown network for sandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" successfully"
Mar 7 00:57:01.289836 containerd[2021]: time="2026-03-07T00:57:01.289707706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 00:57:01.290075 containerd[2021]: time="2026-03-07T00:57:01.290032270Z" level=info msg="RemovePodSandbox \"46f2f9738445381896d64e8b71d40f20d5a70fbc4a6dd15a0b41b421958cccda\" returns successfully"
Mar 7 00:57:01.312163 kubelet[3506]: E0307 00:57:01.312108 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9qlq4" podUID="1aee960f-bbbb-4e34-a189-b955e20c3ef7"
Mar 7 00:57:01.497698 sshd[5283]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:01.511381 kubelet[3506]: E0307 00:57:01.510805 3506 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 00:57:01.515979 systemd[1]: sshd@25-172.31.26.221:22-20.161.92.111:55928.service: Deactivated successfully.
Mar 7 00:57:01.523023 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 00:57:01.525348 systemd[1]: session-26.scope: Consumed 2.181s CPU time.
Mar 7 00:57:01.529656 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit.
Mar 7 00:57:01.539359 systemd-logind[1993]: Removed session 26.
Mar 7 00:57:01.542039 systemd[1]: Created slice kubepods-burstable-podca244811_34c8_4c22_80db_ac2b3a66d350.slice - libcontainer container kubepods-burstable-podca244811_34c8_4c22_80db_ac2b3a66d350.slice.
Mar 7 00:57:01.588620 kubelet[3506]: I0307 00:57:01.588548 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-cilium-cgroup\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588620 kubelet[3506]: I0307 00:57:01.588617 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-etc-cni-netd\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588816 kubelet[3506]: I0307 00:57:01.588660 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca244811-34c8-4c22-80db-ac2b3a66d350-cilium-config-path\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588816 kubelet[3506]: I0307 00:57:01.588699 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-cilium-run\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588816 kubelet[3506]: I0307 00:57:01.588738 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6b9\" (UniqueName: \"kubernetes.io/projected/ca244811-34c8-4c22-80db-ac2b3a66d350-kube-api-access-ln6b9\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588816 kubelet[3506]: I0307 00:57:01.588778 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca244811-34c8-4c22-80db-ac2b3a66d350-cilium-ipsec-secrets\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.588816 kubelet[3506]: I0307 00:57:01.588812 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-host-proc-sys-net\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.588849 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-lib-modules\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.588891 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-host-proc-sys-kernel\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.588929 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-hostproc\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.588964 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ca244811-34c8-4c22-80db-ac2b3a66d350-hubble-tls\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.589003 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca244811-34c8-4c22-80db-ac2b3a66d350-clustermesh-secrets\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589062 kubelet[3506]: I0307 00:57:01.589040 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-cni-path\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589386 kubelet[3506]: I0307 00:57:01.589075 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-xtables-lock\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.589386 kubelet[3506]: I0307 00:57:01.589110 3506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca244811-34c8-4c22-80db-ac2b3a66d350-bpf-maps\") pod \"cilium-x7l2k\" (UID: \"ca244811-34c8-4c22-80db-ac2b3a66d350\") " pod="kube-system/cilium-x7l2k"
Mar 7 00:57:01.596277 systemd[1]: Started sshd@26-172.31.26.221:22-20.161.92.111:48906.service - OpenSSH per-connection server daemon (20.161.92.111:48906).
Mar 7 00:57:01.850902 containerd[2021]: time="2026-03-07T00:57:01.849388597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7l2k,Uid:ca244811-34c8-4c22-80db-ac2b3a66d350,Namespace:kube-system,Attempt:0,}"
Mar 7 00:57:01.893430 containerd[2021]: time="2026-03-07T00:57:01.892949713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:57:01.893430 containerd[2021]: time="2026-03-07T00:57:01.893072641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:57:01.893430 containerd[2021]: time="2026-03-07T00:57:01.893111137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:57:01.893430 containerd[2021]: time="2026-03-07T00:57:01.893299261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:57:01.923587 systemd[1]: Started cri-containerd-a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972.scope - libcontainer container a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972.
Mar 7 00:57:01.969449 containerd[2021]: time="2026-03-07T00:57:01.968976901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7l2k,Uid:ca244811-34c8-4c22-80db-ac2b3a66d350,Namespace:kube-system,Attempt:0,} returns sandbox id \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\""
Mar 7 00:57:01.981868 containerd[2021]: time="2026-03-07T00:57:01.981761017Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 00:57:02.013256 containerd[2021]: time="2026-03-07T00:57:02.011089269Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776\""
Mar 7 00:57:02.017164 containerd[2021]: time="2026-03-07T00:57:02.014050365Z" level=info msg="StartContainer for \"88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776\""
Mar 7 00:57:02.073547 systemd[1]: Started cri-containerd-88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776.scope - libcontainer container 88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776.
Mar 7 00:57:02.123266 containerd[2021]: time="2026-03-07T00:57:02.123069418Z" level=info msg="StartContainer for \"88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776\" returns successfully"
Mar 7 00:57:02.125718 sshd[5297]: Accepted publickey for core from 20.161.92.111 port 48906 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:02.128833 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:02.137524 systemd-logind[1993]: New session 27 of user core.
Mar 7 00:57:02.146520 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 00:57:02.150503 systemd[1]: cri-containerd-88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776.scope: Deactivated successfully.
Mar 7 00:57:02.209640 containerd[2021]: time="2026-03-07T00:57:02.209156722Z" level=info msg="shim disconnected" id=88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776 namespace=k8s.io
Mar 7 00:57:02.209640 containerd[2021]: time="2026-03-07T00:57:02.209469214Z" level=warning msg="cleaning up after shim disconnected" id=88143d0ab484a635ecab7d6b07682d759cc92570d0b0cb9989768c64a35ef776 namespace=k8s.io
Mar 7 00:57:02.209640 containerd[2021]: time="2026-03-07T00:57:02.209492842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:02.480009 sshd[5297]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:02.487309 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit.
Mar 7 00:57:02.489132 systemd[1]: sshd@26-172.31.26.221:22-20.161.92.111:48906.service: Deactivated successfully.
Mar 7 00:57:02.493591 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 00:57:02.496209 systemd-logind[1993]: Removed session 27.
Mar 7 00:57:02.573876 systemd[1]: Started sshd@27-172.31.26.221:22-20.161.92.111:48922.service - OpenSSH per-connection server daemon (20.161.92.111:48922).
Mar 7 00:57:02.899578 containerd[2021]: time="2026-03-07T00:57:02.899363066Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 00:57:02.937759 containerd[2021]: time="2026-03-07T00:57:02.937677098Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba\""
Mar 7 00:57:02.939438 containerd[2021]: time="2026-03-07T00:57:02.938622302Z" level=info msg="StartContainer for \"d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba\""
Mar 7 00:57:02.996558 systemd[1]: Started cri-containerd-d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba.scope - libcontainer container d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba.
Mar 7 00:57:03.063105 containerd[2021]: time="2026-03-07T00:57:03.063031691Z" level=info msg="StartContainer for \"d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba\" returns successfully"
Mar 7 00:57:03.079493 systemd[1]: cri-containerd-d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba.scope: Deactivated successfully.
Mar 7 00:57:03.086844 sshd[5414]: Accepted publickey for core from 20.161.92.111 port 48922 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:03.089502 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:03.110133 systemd-logind[1993]: New session 28 of user core.
Mar 7 00:57:03.116557 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 00:57:03.150585 containerd[2021]: time="2026-03-07T00:57:03.150351407Z" level=info msg="shim disconnected" id=d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba namespace=k8s.io
Mar 7 00:57:03.150585 containerd[2021]: time="2026-03-07T00:57:03.150431075Z" level=warning msg="cleaning up after shim disconnected" id=d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba namespace=k8s.io
Mar 7 00:57:03.150585 containerd[2021]: time="2026-03-07T00:57:03.150451595Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:03.313266 kubelet[3506]: E0307 00:57:03.312246 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9qlq4" podUID="1aee960f-bbbb-4e34-a189-b955e20c3ef7"
Mar 7 00:57:03.697281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d59d2257fb0e2f77ad6a61acb042714bbdeccd9eb9d85369af81d8314233a7ba-rootfs.mount: Deactivated successfully.
Mar 7 00:57:03.904809 containerd[2021]: time="2026-03-07T00:57:03.903968403Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 00:57:03.937756 containerd[2021]: time="2026-03-07T00:57:03.934897131Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61\""
Mar 7 00:57:03.937756 containerd[2021]: time="2026-03-07T00:57:03.936212895Z" level=info msg="StartContainer for \"2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61\""
Mar 7 00:57:03.998587 systemd[1]: Started cri-containerd-2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61.scope - libcontainer container 2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61.
Mar 7 00:57:04.073817 containerd[2021]: time="2026-03-07T00:57:04.073622448Z" level=info msg="StartContainer for \"2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61\" returns successfully"
Mar 7 00:57:04.086565 systemd[1]: cri-containerd-2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61.scope: Deactivated successfully.
Mar 7 00:57:04.124963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61-rootfs.mount: Deactivated successfully.
Mar 7 00:57:04.134278 containerd[2021]: time="2026-03-07T00:57:04.134085588Z" level=info msg="shim disconnected" id=2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61 namespace=k8s.io
Mar 7 00:57:04.134278 containerd[2021]: time="2026-03-07T00:57:04.134160588Z" level=warning msg="cleaning up after shim disconnected" id=2c9593ed5644da1fe236002726d217909162d45ed26cb0fcdf1140d245d67a61 namespace=k8s.io
Mar 7 00:57:04.134278 containerd[2021]: time="2026-03-07T00:57:04.134181804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:04.643489 kubelet[3506]: I0307 00:57:04.642421 3506 setters.go:618] "Node became not ready" node="ip-172-31-26-221" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T00:57:04Z","lastTransitionTime":"2026-03-07T00:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 00:57:04.913445 containerd[2021]: time="2026-03-07T00:57:04.912012316Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 00:57:04.940341 containerd[2021]: time="2026-03-07T00:57:04.940262104Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5\""
Mar 7 00:57:04.942498 containerd[2021]: time="2026-03-07T00:57:04.942395560Z" level=info msg="StartContainer for \"916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5\""
Mar 7 00:57:05.010574 systemd[1]: Started cri-containerd-916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5.scope - libcontainer container 916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5.
Mar 7 00:57:05.145679 containerd[2021]: time="2026-03-07T00:57:05.145611685Z" level=info msg="StartContainer for \"916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5\" returns successfully"
Mar 7 00:57:05.147068 systemd[1]: cri-containerd-916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5.scope: Deactivated successfully.
Mar 7 00:57:05.217375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5-rootfs.mount: Deactivated successfully.
Mar 7 00:57:05.223044 containerd[2021]: time="2026-03-07T00:57:05.222945433Z" level=info msg="shim disconnected" id=916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5 namespace=k8s.io
Mar 7 00:57:05.223044 containerd[2021]: time="2026-03-07T00:57:05.223027225Z" level=warning msg="cleaning up after shim disconnected" id=916560f50cb00a788cd5f710ce89709f4950e6dea4a1ffe9e5792b39c52307a5 namespace=k8s.io
Mar 7 00:57:05.223044 containerd[2021]: time="2026-03-07T00:57:05.223049617Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:05.269145 containerd[2021]: time="2026-03-07T00:57:05.269067674Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:57:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 00:57:05.311607 kubelet[3506]: E0307 00:57:05.311527 3506 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-9qlq4" podUID="1aee960f-bbbb-4e34-a189-b955e20c3ef7"
Mar 7 00:57:05.917043 containerd[2021]: time="2026-03-07T00:57:05.916318697Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 00:57:05.956421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318033811.mount: Deactivated successfully.
Mar 7 00:57:05.959735 containerd[2021]: time="2026-03-07T00:57:05.959573177Z" level=info msg="CreateContainer within sandbox \"a119c5993fe829cdf2f0d3fc76af8d4487381eb1d49b189bd8b983e141c5a972\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd\""
Mar 7 00:57:05.962096 containerd[2021]: time="2026-03-07T00:57:05.960534821Z" level=info msg="StartContainer for \"be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd\""
Mar 7 00:57:06.020668 systemd[1]: Started cri-containerd-be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd.scope - libcontainer container be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd.
Mar 7 00:57:06.084980 containerd[2021]: time="2026-03-07T00:57:06.084919598Z" level=info msg="StartContainer for \"be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd\" returns successfully"
Mar 7 00:57:06.900263 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Mar 7 00:57:11.207684 systemd-networkd[1855]: lxc_health: Link UP
Mar 7 00:57:11.214857 (udev-worker)[6150]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:57:11.219022 systemd-networkd[1855]: lxc_health: Gained carrier Mar 7 00:57:11.909133 kubelet[3506]: I0307 00:57:11.909014 3506 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x7l2k" podStartSLOduration=10.908994563 podStartE2EDuration="10.908994563s" podCreationTimestamp="2026-03-07 00:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:57:06.958621722 +0000 UTC m=+126.008205699" watchObservedRunningTime="2026-03-07 00:57:11.908994563 +0000 UTC m=+130.958578516" Mar 7 00:57:12.393553 systemd-networkd[1855]: lxc_health: Gained IPv6LL Mar 7 00:57:14.582995 systemd[1]: run-containerd-runc-k8s.io-be61ecd8262b305549c5b7727abc851d1d5a0479c3cb819f049d76ab254be1fd-runc.Api2y4.mount: Deactivated successfully. Mar 7 00:57:14.952181 ntpd[1988]: Listen normally on 14 lxc_health [fe80::ecb7:81ff:fe90:cfde%14]:123 Mar 7 00:57:14.952906 ntpd[1988]: 7 Mar 00:57:14 ntpd[1988]: Listen normally on 14 lxc_health [fe80::ecb7:81ff:fe90:cfde%14]:123 Mar 7 00:57:17.025395 sshd[5414]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:17.033930 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. Mar 7 00:57:17.034937 systemd[1]: sshd@27-172.31.26.221:22-20.161.92.111:48922.service: Deactivated successfully. Mar 7 00:57:17.041545 systemd[1]: session-28.scope: Deactivated successfully. Mar 7 00:57:17.050339 systemd-logind[1993]: Removed session 28. Mar 7 00:57:31.187744 systemd[1]: cri-containerd-378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68.scope: Deactivated successfully. Mar 7 00:57:31.189442 systemd[1]: cri-containerd-378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68.scope: Consumed 5.876s CPU time, 20.1M memory peak, 0B memory swap peak. 
Mar 7 00:57:31.229521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68-rootfs.mount: Deactivated successfully. Mar 7 00:57:31.238749 containerd[2021]: time="2026-03-07T00:57:31.238508235Z" level=info msg="shim disconnected" id=378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68 namespace=k8s.io Mar 7 00:57:31.239501 containerd[2021]: time="2026-03-07T00:57:31.238824207Z" level=warning msg="cleaning up after shim disconnected" id=378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68 namespace=k8s.io Mar 7 00:57:31.239501 containerd[2021]: time="2026-03-07T00:57:31.238849035Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:31.261281 containerd[2021]: time="2026-03-07T00:57:31.260506827Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:57:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 00:57:31.994149 kubelet[3506]: I0307 00:57:31.992505 3506 scope.go:117] "RemoveContainer" containerID="378f5a128f3aba8db2e00b786d5a5565e8fa8e17a35b848025d1236d3664dc68" Mar 7 00:57:31.999910 containerd[2021]: time="2026-03-07T00:57:31.999644778Z" level=info msg="CreateContainer within sandbox \"b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 00:57:32.029274 containerd[2021]: time="2026-03-07T00:57:32.029187651Z" level=info msg="CreateContainer within sandbox \"b4622f446bf95b4e1db27991eb64990b493ba66f1f5140ac7059723ec1fd56b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996\"" Mar 7 00:57:32.030325 containerd[2021]: time="2026-03-07T00:57:32.030146379Z" level=info msg="StartContainer for 
\"a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996\"" Mar 7 00:57:32.099544 systemd[1]: Started cri-containerd-a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996.scope - libcontainer container a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996. Mar 7 00:57:32.171073 containerd[2021]: time="2026-03-07T00:57:32.170971251Z" level=info msg="StartContainer for \"a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996\" returns successfully" Mar 7 00:57:32.229577 systemd[1]: run-containerd-runc-k8s.io-a468982544266e724d7b5143ffd66b26dcb857098bd03961e03f4e79dd9d3996-runc.4umePd.mount: Deactivated successfully. Mar 7 00:57:34.586874 kubelet[3506]: E0307 00:57:34.586790 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": context deadline exceeded" Mar 7 00:57:36.831666 systemd[1]: cri-containerd-854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29.scope: Deactivated successfully. Mar 7 00:57:36.832736 systemd[1]: cri-containerd-854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29.scope: Consumed 5.118s CPU time, 14.0M memory peak, 0B memory swap peak. Mar 7 00:57:36.876074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:36.884554 containerd[2021]: time="2026-03-07T00:57:36.884460959Z" level=info msg="shim disconnected" id=854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29 namespace=k8s.io Mar 7 00:57:36.884554 containerd[2021]: time="2026-03-07T00:57:36.884540483Z" level=warning msg="cleaning up after shim disconnected" id=854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29 namespace=k8s.io Mar 7 00:57:36.885255 containerd[2021]: time="2026-03-07T00:57:36.884562275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:37.010332 kubelet[3506]: I0307 00:57:37.010268 3506 scope.go:117] "RemoveContainer" containerID="854e57b5dcb1d454841e9388118c1451dced2348b422d4e7d58fc199053b7f29" Mar 7 00:57:37.013686 containerd[2021]: time="2026-03-07T00:57:37.013500463Z" level=info msg="CreateContainer within sandbox \"daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 00:57:37.044181 containerd[2021]: time="2026-03-07T00:57:37.043969903Z" level=info msg="CreateContainer within sandbox \"daf85e0678a32e18c876437567296c084244a46dbc4dcecb53f24a197a92bacd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6875613d824bde51b7cedbff76d57167a878880060a46a4c79723c53b6cce746\"" Mar 7 00:57:37.045028 containerd[2021]: time="2026-03-07T00:57:37.044966791Z" level=info msg="StartContainer for \"6875613d824bde51b7cedbff76d57167a878880060a46a4c79723c53b6cce746\"" Mar 7 00:57:37.099575 systemd[1]: Started cri-containerd-6875613d824bde51b7cedbff76d57167a878880060a46a4c79723c53b6cce746.scope - libcontainer container 6875613d824bde51b7cedbff76d57167a878880060a46a4c79723c53b6cce746. 
Mar 7 00:57:37.164961 containerd[2021]: time="2026-03-07T00:57:37.164863508Z" level=info msg="StartContainer for \"6875613d824bde51b7cedbff76d57167a878880060a46a4c79723c53b6cce746\" returns successfully" Mar 7 00:57:44.587267 kubelet[3506]: E0307 00:57:44.587062 3506 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.221:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-221?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"